<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Francesco Bonacci</title>
    <description>The latest articles on Forem by Francesco Bonacci (@francedot).</description>
    <link>https://forem.com/francedot</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1273278%2Ffcf5d1a7-9244-4c47-b955-3bccdb0a8ab0.jpeg</url>
      <title>Forem: Francesco Bonacci</title>
      <link>https://forem.com/francedot</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/francedot"/>
    <language>en</language>
    <item>
      <title>iOS &amp; OS Agents in the Era of Multi-Modal Generative AI</title>
      <dc:creator>Francesco Bonacci</dc:creator>
      <pubDate>Tue, 05 Mar 2024 20:18:21 +0000</pubDate>
      <link>https://forem.com/francedot/ios-os-agents-in-the-era-of-multi-modal-generative-ai-154p</link>
      <guid>https://forem.com/francedot/ios-os-agents-in-the-era-of-multi-modal-generative-ai-154p</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Przc93Mk3S8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The launch of OpenAI’s GPT-4V(ision) model in November 2023 marked a significant shift in the development of &lt;strong&gt;OS Agents&lt;/strong&gt;. By &lt;em&gt;OS Agents (also called UI-focused Agents)&lt;/em&gt; we refer to Generative AI agents capable of navigating and using the applications and functions available on a host device, or on any other device they can interact with and control. User interaction can take the form of plain text, such as an end goal, or voice (e.g., through the Whisper APIs).&lt;/p&gt;

&lt;p&gt;Prior to November, efforts to create UI automations mainly depended on purely textual agents that grounded the LLM in the &lt;strong&gt;Document Object Model (DOM)&lt;/strong&gt; of the app or page under test, such as its XML or HTML representation. However, this approach often proved ineffective: such representations carry excessive noise and volume, which obscures the true &lt;em&gt;semantic meaning&lt;/em&gt; of the page from the user’s perspective. In contrast, GPT-4V and various other vision models offer a more effective approach by leveraging the rich semantic information embedded in the visual aspects of interactable objects. This includes understanding an object’s appearance and its spatial relationships on the page to better discern its role and predict the actions applicable to it (e.g., whether it is a search box, a button, or an element that can be dragged).&lt;/p&gt;

&lt;p&gt;UI-focused agents typically perform an action on a target app in two steps per turn (a minimal sketch follows this list):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, predicting the next probable action given the current page state. The primary methods of control involve &lt;strong&gt;UI automation&lt;/strong&gt;, including actions like tapping, typing, and scrolling within the user interface of the target system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, generating automation code by relying on accessibility IDs and XPath selectors of the underlying page DOM.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
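
&lt;p&gt;Here is a minimal TypeScript sketch of this two-step loop. All names here are illustrative stubs, not the framework’s actual API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative stubs for the two-step turn described above.
type NLAction = {
  actionType: "tap" | "type" | "scroll";
  actionTarget: string; // natural-language description of the target element
};

// Step 1 (visual): predict the next probable action from the current screenshot.
async function predictNextAction(screenshot: Buffer): Promise&amp;lt;NLAction&amp;gt; {
  return { actionType: "tap", actionTarget: "Search bar" }; // model call stubbed
}

// Step 2 (textual grounding): resolve the NL action to a DOM selector (e.g., an XPath).
async function generateSelector(action: NLAction, dom: string): Promise&amp;lt;string&amp;gt; {
  return `//XCUIElementTypeSearchField[@name="${action.actionTarget}"]`; // stubbed
}

async function runTurn(screenshot: Buffer, dom: string): Promise&amp;lt;void&amp;gt; {
  const action = await predictNextAction(screenshot);
  const selector = await generateSelector(action, dom);
  // ...execute the selector through the platform's UI-automation layer
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;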

&lt;p&gt;Many research papers have emerged since the release of GPT-4V, demonstrating the effectiveness of a purely visual approach for the first prediction step, relying on textual grounding only for the second, where applicable. Useful papers that discuss this approach extensively include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://arxiv.org/abs/2401.01614"&gt;GPT-4V(ision) is a Generalist Web Agent, if Grounded&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://arxiv.org/abs/2402.07939v3"&gt;UFO: A UI-Focused Agent for Windows OS Interaction&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://arxiv.org/abs/2312.13771"&gt;AppAgent: Multimodal Agents as Smartphone Users&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below are some results on Windows, in terms of task completion rate across common apps using such vision + text agents. Credits to &lt;a href="https://github.com/microsoft/UFO"&gt;UFO&lt;/a&gt; for the benchmark.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nfzJm3Tl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4090/1%2Ahys7WaDsVhKmg0-cXcANug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nfzJm3Tl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4090/1%2Ahys7WaDsVhKmg0-cXcANug.png" alt="Performance comparison achieved by UFO on WindowsBench." width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4ydI_nsX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3766/1%2AIiGYd9ymXcBJT7A7Wqwqvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4ydI_nsX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3766/1%2AIiGYd9ymXcBJT7A7Wqwqvw.png" alt="The detailed performance breakdown across applications achieved by UFO on WindowsBench." width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Regardless of the specific focus of these studies, the methodology described can be universally applied across various application domains and operating systems. This is where &lt;a href="https://github.com/francedot/NavAIGuide-TS"&gt;NavAIGuide&lt;/a&gt; comes into play.&lt;/p&gt;

&lt;p&gt;The project’s objective is to offer a &lt;strong&gt;TypeScript&lt;/strong&gt;-based, extensible, multi-modal, and UI-focused framework that is crafted to execute plans and address user queries effectively. Designed to be cross-OS, it supports mobile platforms (iOS, Android), web, and desktop environments seamlessly. Additionally, it supports a range of vision and textual models, such as GPT-4V, Claude, LLaVA, and Open-Interpreter, to enhance its predictive capabilities and code generation processes.&lt;/p&gt;

&lt;p&gt;As of the time of writing this article, only an iOS implementation of the framework is available. To my knowledge, this is the first UI-focused agent on iOS 🥳. Contributions to backend implementations for Android and macOS are more than welcome!&lt;/p&gt;

&lt;p&gt;The framework is broken down into different packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/francedot/NavAIGuide-TS/tree/main/packages/core"&gt;navaiguide/core&lt;/a&gt;: Exposes an unenforced set of agents for planning, predicting, and generating code selectors for each action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/francedot/NavAIGuide-TS/tree/main/packages/ios"&gt;navaiguide/ios&lt;/a&gt;: Exposes the iOS implementation of the NavAIGuideBaseAgent, along with the glue with Xcode, WDA, and Appium to make UI Automation possible on a real iOS device.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  navaiguide/core
&lt;/h2&gt;

&lt;p&gt;Let’s start by delving deeper into &lt;a href="https://github.com/francedot/NavAIGuide-TS/tree/main/packages/core"&gt;navaiguide/core&lt;/a&gt;. The NavAIGuide class exposes three agents worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;startTaskPlanner_Agent: A simple planner that accesses the ecosystem of apps available on your device, formulating a cross-app plan to fulfill the request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;predictNextNLAction_Visual_Agent: This visual agent is responsible for analyzing the current page screenshot to predict the next probable action in terms of the type of action (tap, type, scroll), visual description, and position context (e.g., bounding box or coordinates, if supported by the model) of the target element. Additionally, the agent acts as a &lt;strong&gt;feedback loop&lt;/strong&gt; by comparing the current state of the page against the previously taken action, analyzing both the previous and current page screenshots. Since GPT-4V currently struggles to tell multiple input images apart, this is achieved by drawing watermarks on the captured screenshots, as shown in the images below.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--foGOi_Mb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A02aCKbxzj583TieYsh1Heg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--foGOi_Mb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2A02aCKbxzj583TieYsh1Heg.jpeg" alt="" width="590" height="1280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FN2mV2ow--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AfwnfcgJl7I54ViTA73g5gA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FN2mV2ow--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/1%2AfwnfcgJl7I54ViTA73g5gA.jpeg" alt="Before and After screenshot for a ‘tap’ action in NavAIGuide" width="590" height="1280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of the type of output that can be expected from this agent is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "previousActionSuccess": false,
  "previousActionSuccessExplanation": "The previous action was not successful as the keyboard is not visible in the second screenshot.",
  "endGoalMet": false,
  "endGoalMetExplanation": "The goal of finding a coffee shop nearby hasn't been met yet.",
  "actionType": "tap",
  "actionTarget": "Search bar",
  "actionDescription": "Retry tapping the search bar with corrected coordinates.",
  "actionExpectedOutcome": "The keyboard becomes visible.",
  "actionTargetVisualDescription": "A white search bar at the top of the page with a magnifying glass icon and a placeholder text 'Search for a place or address'",
  "actionTargetPositionContext": "The search bar is located at the top of the page, just below the app's title and the search bar's placeholder text."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
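
&lt;p&gt;For reference, this response maps naturally onto a TypeScript type (the type name is mine; the fields mirror the example above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Shape of the visual agent's prediction, as seen in the example above.
type NextNLActionResponse = {
  previousActionSuccess: boolean;
  previousActionSuccessExplanation: string;
  endGoalMet: boolean;
  endGoalMetExplanation: string;
  actionType: "tap" | "type" | "scroll";
  actionTarget: string;
  actionDescription: string;
  actionExpectedOutcome: string;
  actionTargetVisualDescription: string;
  actionTargetPositionContext: string;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;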

&lt;ul&gt;
&lt;li&gt;generateCodeSelectorsWithRetry_Agent: This textual agent processes the natural language (NL) action provided by the previous agent, along with the Document Object Model (DOM) representation of the page (for iOS, this is an &lt;strong&gt;XCUITest XML DOM&lt;/strong&gt;), to generate code selectors. Multiple selectors can be generated in certain scenarios. When this happens, each selector is returned with an accompanying &lt;strong&gt;confidence score&lt;/strong&gt; and is subject to a &lt;strong&gt;retry mechanism&lt;/strong&gt; for enhanced accuracy. Additionally, if the DOM is too large to fit into a single request to the textual model, it is automatically divided into smaller &lt;strong&gt;chunks&lt;/strong&gt; (a sketch of this follows the example output below).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An example of the type of output that can be expected from the generateCodeSelectorsWithRetry_Agent:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "selectorsByRelevance": [
    {
      "selector": '//XCUIElementTypeButton[@name="Home"]',
      "relevanceScore": 10
    },
    {
      "selector": '//XCUIElementTypeButton[@name="Hom"]',
      "relevanceScore": 8
    },
    {
      "selector": '//XCUIElementTypeButton[@name="Search"]',
      "relevanceScore": 1
    }
  ] 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
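
&lt;p&gt;The chunking step is straightforward to picture. A minimal sketch (illustrative only, not the framework’s actual implementation) splits the XML DOM into fixed-size windows that each fit within the model’s context:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Naive fixed-size chunking. A real implementation would split on element
// boundaries so that selectors remain parseable within a single chunk.
function chunkDom(dom: string, maxChars: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i &amp;lt; dom.length; i += maxChars) {
    chunks.push(dom.slice(i, i + maxChars));
  }
  return chunks;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;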
&lt;h2&gt;
  
  
  navaiguide/ios
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/francedot/NavAIGuide-TS/tree/main/packages/ios"&gt;navaiguide/ios&lt;/a&gt; builds upon the core component to enable building AI agents that can command real iOS devices. Here’s how:&lt;/p&gt;
&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;p&gt;Before running &lt;a href="https://github.com/francedot/NavAIGuide-TS/tree/main/packages/ios"&gt;navaiguide/ios&lt;/a&gt;, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Follow the core &lt;a href="https://github.com/francedot/NavAIGuide-TS/blob/main/packages/core/README.md"&gt;pre-requisites&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;macOS with Xcode 15.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;iOS device (simulators not supported).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.apple.com/programs/"&gt;Apple Developer&lt;/a&gt; Free Account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go Build Tools (currently required as a dependency for go-ios).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Appium Server with XCUITest Driver.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Steps:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. ⚡Install NavAIGuide-iOS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can choose to either clone the repository or use npm, yarn, or pnpm to install NavAIGuide.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/francedot/NavAIGuide-TS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;npm&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @navaiguide/ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Yarn&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @navaiguide/ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. Go-iOS Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go-iOS is required for NavAIGuide to list apps and start a pre-installed WDA Runner on the target device. If your device is running iOS 17, support for waking up the WDA Runner is experimental and not yet available in the published go-ios npm packages. Therefore, you need to install the latest version from the &lt;a href="https://github.com/danielpaulus/go-ios/tree/ios-17"&gt;ios-17 branch&lt;/a&gt; and manually build an executable, which requires installing the Go build tools.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install Go build tools on macOS
brew install go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once installed, you can run this utility script to build go-ios. This will copy the go-ios executable to the ./packages/ios/bin directory, which is necessary for the next steps.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# If you cloned the repository:
cd packages/ios
npm run build-go-ios

# If installed through the npm package:
npm explore @navaiguide/ios -- npm run build-go-ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Appium&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the Appium server globally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g appium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Launching Appium from the terminal should produce output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q8Nt-sA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4272/0%2AtTOpGvzXE6RIfZN5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q8Nt-sA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/4272/0%2AtTOpGvzXE6RIfZN5.png" alt="" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install and run appium-doctor to diagnose and fix any iOS configuration issues:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g appium-doctor
appium-doctor --ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the &lt;a href="https://github.com/appium/appium-xcuitest-driver/tree/master"&gt;Appium XCUITest Driver&lt;/a&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;appium driver install xcuitest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This step will also clone the Appium &lt;a href="https://appium.github.io/appium-xcuitest-driver/4.16/wda-custom-server"&gt;WebDriverAgent (WDA)&lt;/a&gt; Xcode project, required in the next step. Check that the Xcode project exists at ~/.appium/node_modules/appium-xcuitest-driver/node_modules/appium-webdriveragent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Enable Developer Settings &amp;amp; UI Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you haven’t already, enable &lt;a href="https://developer.apple.com/documentation/xcode/enabling-developer-mode-on-a-device"&gt;Developer Mode&lt;/a&gt; on your target device. If required, reboot your phone.&lt;/p&gt;

&lt;p&gt;Next, enable UI Automation from Settings/Developer. This will allow the WDA Runner to control your device and execute XCUITests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5cHUTobr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2A2gJbA6JrBAis3T9Q.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5cHUTobr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2A2gJbA6JrBAis3T9Q.jpeg" alt="" width="800" height="715"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. WDA Building and Signing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next step is to build and sign the Appium WDA project through Xcode.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd '~/.appium/node_modules/appium-xcuitest-driver/node_modules/appium-webdriveragent'
open 'WebDriverAgent.xcodeproj'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0YMkbmjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3876/0%2AhNZWYk75IFFvLJa5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0YMkbmjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/3876/0%2AhNZWYk75IFFvLJa5.png" alt="" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select WebDriverAgentRunner from the target section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Signing &amp;amp; Capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check the Automatically manage signing checkbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose your Team from the Team dropdown.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In Bundle Identifier, replace the value with a bundle identifier of your choice, for example: com.&amp;lt;your-name&amp;gt;.wda.runner.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Building the WDA project from Xcode (and macOS) is optional if you already have a pre-built IPA, but it must be re-signed with your Apple Developer account’s certificate. For instructions on how to do this, see Daniel Paulus’ &lt;a href="https://github.com/danielpaulus/wda-signer"&gt;wda-signer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, to deploy and run the WDA Runner on the target real device:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;xcodebuild build-for-testing test-without-building -project WebDriverAgent.xcodeproj -scheme WebDriverAgentRunner -destination 'id=&amp;lt;YOUR_DEVICE_UDID&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can find your connected device UDID with go-ios.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./go-ios list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The xcodebuild step is only required once, as we will later use go-ios to wake up a previously installed WDA Runner on your device.&lt;/p&gt;

&lt;p&gt;If the xcodebuild step succeeds, you should see a WDA Runner app installed, and your device will enter UI Automation mode (indicated by a watermark on the screen).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Wake Up WDA Runner &amp;amp; Run Appium Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, let’s exit UI Automation mode by holding the Volume Up and Down buttons simultaneously, and confirm that go-ios can wake up the installed WDA Runner.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# If you cloned the repository:
npm run run-wda -- --WDA_BUNDLE_ID=com.example.wdabundleid --WDA_TEST_RUNNER_BUNDLE_ID=com.example.wdabundleid --DEVICE_UDID=12345

# If installed through the npm package:
npm explore @navaiguide/ios -- npm run run-wda -- --WDA_BUNDLE_ID=com.example.wdabundleid --WDA_TEST_RUNNER_BUNDLE_ID=com.example.wdabundleid --DEVICE_UDID=12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If successful, you should see the device enter UI Automation mode again. What’s changed? Because go-ios now handles the wake-up, the device can also be controlled from Linux and, soon, Windows (see the latest go-ios release).&lt;/p&gt;

&lt;p&gt;As we are not running XCUITests directly but through Appium, we will need to run the Appium Server next, which will listen for any WebDriverIO commands and translate them into XCUITests for the WDA Runner to execute.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Provided that you've installed Appium as a global npm package
appium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;7. Run an iOS AI Agent&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the Appium Server and WDA running, we can finally run our first AI-powered iOS agent. Let’s see how:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { iOSAgent } from "@navaiguide/ios";

const iosAgent = new iOSAgent({
    // openAIApiKey: "YOUR_OPEN_AI_API_KEY", // Optional if set through process.env.OPEN_AI_API_KEY
    appiumBaseUrl: 'http://127.0.0.1',
    appiumPort: 4723,
    iOSVersion: "17.3.0",
    deviceUdid: "&amp;lt;DEVICE_UDID&amp;gt;"
});

const fitnessPlannerQuery = "Help me run a 30-day fitness challenge.";
await iosAgent.runAsync({
    query: fitnessPlannerQuery
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is all that’s needed to build UI-focused agents for iOS.&lt;/p&gt;

&lt;p&gt;I encourage you to experiment with building your own and share your experiences and insights. Your feedback and contributions are invaluable as we strive to push the boundaries of what’s possible with vision models.&lt;/p&gt;

&lt;p&gt;Stay tuned for more updates, and in the meantime, happy hacking! 🤖&lt;/p&gt;

&lt;p&gt;Curious about a cool OSS application of this tech? Check out &lt;a href="https://github.com/OwlAIProject/Owl"&gt;OwlAIProject/Owl: A personal wearable AI that runs locally&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Appendix — Pills of iOS UI Automation
&lt;/h2&gt;

&lt;p&gt;The following are some technical definitions of the glue components that make running UI-focused agents possible on iOS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/danielpaulus/go-ios"&gt;Go-iOS&lt;/a&gt;: A set of tools written in Go that allows you to control iOS devices on Linux and Windows. It notably includes the capability to start and kill apps and run UI tests on iOS devices. It uses a reverse-engineered version of the DTX Message Framework. &lt;strong&gt;Kudos to Daniel Paulus&lt;/strong&gt; and the whole OSS community at go-ios on recently adding support for iOS 17.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebDriverAgent (WDA)&lt;/strong&gt;: To automate tasks on an iOS device, installing WebDriverAgent (WDA) is required. WDA, initially a Facebook project and now maintained by Appium, acts as the core of all iOS automation tools and services. iOS’s strict security model prevents direct input simulation or screenshot capture via public APIs or shell commands. WebDriverAgent circumvents these limitations by launching an HTTP server on the device, exposing XCUITest framework functions as REST calls.&lt;/p&gt;
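
&lt;p&gt;For a quick sanity check, you can hit WDA’s status endpoint directly. This assumes WDA’s default port 8100, reachable from your machine (e.g., via a port forward to the device):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Query a running WebDriverAgent instance over its REST interface.
// Assumes the default WDA port 8100, forwarded to this machine.
const response = await fetch("http://localhost:8100/status");
console.log(await response.json()); // device and session status reported by WDA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;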

&lt;p&gt;&lt;strong&gt;UI Automation Mode&lt;/strong&gt;: This is the state triggered on the target device when running UI Automation XCUITests. It must be enabled through the Developer Settings before installing the WDA Runner App.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebdriverIO&lt;/strong&gt;: An open-source testing utility for Node.js that enables developers to automate testing for web applications. In the context of UI automation for iOS applications, WebdriverIO can be used alongside Appium, a mobile application automation framework. Appium acts as a bridge between WebdriverIO tests and the iOS platform, allowing tests written in WebdriverIO to interact with iOS applications as if a real user were using them. This integration supports the automation of both native and hybrid iOS apps.&lt;/p&gt;
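
&lt;p&gt;As a concrete illustration, a raw WebdriverIO session against a local Appium server with the XCUITest driver looks roughly like this (the UDID is a placeholder; this is a sketch of the plumbing NavAIGuide drives for you):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { remote } from "webdriverio";

// Connect to a local Appium server and drive a real device via the XCUITest driver.
const driver = await remote({
  hostname: "127.0.0.1",
  port: 4723,
  capabilities: {
    platformName: "iOS",
    "appium:automationName": "XCUITest",
    "appium:udid": "&amp;lt;DEVICE_UDID&amp;gt;",
  },
});

// Locate an element with an XPath selector and tap it.
await driver.$('//XCUIElementTypeButton[@name="Home"]').click();
await driver.deleteSession();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;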

&lt;p&gt;&lt;strong&gt;Appium&lt;/strong&gt;: An open-source, cross-platform test automation tool used for automating native, mobile web, and hybrid applications on iOS and Android platforms. Appium uses the WebDriver protocol to interact with iOS and Android applications. For iOS, it primarily relies on Apple’s XCUITest framework (for iOS 9.3 and above), and for older versions, it used the UIAutomation framework. XCUITest, part of XCTest, is Apple’s official UI testing framework, which Appium leverages to perform actions on the iOS UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QgMCwlG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2ANHfIe5qe8g7qNMB_" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QgMCwlG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/2000/0%2ANHfIe5qe8g7qNMB_" alt="" width="735" height="743"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ios</category>
      <category>android</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>NavAIGuide-TS</title>
      <dc:creator>Francesco Bonacci</dc:creator>
      <pubDate>Sun, 04 Feb 2024 22:55:48 +0000</pubDate>
      <link>https://forem.com/francedot/navaiguide-ts-4o5f</link>
      <guid>https://forem.com/francedot/navaiguide-ts-4o5f</guid>
      <description>&lt;h2&gt;
  
  
  🤖 NavAIGuide-TS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/francedot/NavAIGuide-TS"&gt;https://github.com/francedot/NavAIGuide-TS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xjvoFucv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/francedot/NavAIGuide/blob/main/img/logo.png%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xjvoFucv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://github.com/francedot/NavAIGuide/blob/main/img/logo.png%3Fraw%3Dtrue" width="735" height="743"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  🤔 What is NavAIGuide?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;NavAIGuide&lt;/strong&gt; (/næv eɪ aɪ ɡaɪd/) is an extensible TypeScript component toolkit for integrating LLMs into Navigation Agents and Browser Companions. Key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Natural Language Task Detection:&lt;/strong&gt; Supports both visual (using GPT-4V) and textual modes to identify tasks from web pages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation Code Generation:&lt;/strong&gt; Automates the creation of code for predicted tasks with options for Playwright (requires Node) or native JavaScript Browser APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Grounding:&lt;/strong&gt; Enhances the accuracy of locating visual elements on web pages for better interaction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient DOM Processing and Token Reduction:&lt;/strong&gt; Utilizes advanced strategies for DOM element management, significantly reducing the number of tokens required for accurate grounding and action detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability:&lt;/strong&gt; Includes a retry mechanism with exponential backoff to handle transient failures in LLM calls (sketched after this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON Mode &amp;amp; Action-based Framework:&lt;/strong&gt; Utilizes JSON mode and reproducible outputs for predictable outcomes and an action-oriented approach for task execution.&lt;/li&gt;
&lt;/ul&gt;
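
&lt;p&gt;As a rough picture of the reliability mechanic, a generic retry-with-exponential-backoff wrapper looks like this (a sketch, not NavAIGuide’s actual helper):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Retry an async operation with exponential backoff (500ms, 1s, 2s, ...).
async function withRetry&amp;lt;T&amp;gt;(
  fn: () =&amp;gt; Promise&amp;lt;T&amp;gt;,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise&amp;lt;T&amp;gt; {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn(); // e.g., an LLM call
    } catch (error) {
      if (attempt &amp;gt;= maxAttempts) throw error;
      await new Promise((resolve) =&amp;gt; setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;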

&lt;p&gt;&lt;strong&gt;NavAIGuide Agents&lt;/strong&gt; extend the core toolkit with advanced automation solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Preview of Playwright-based Agents:&lt;/strong&gt; Initial offerings for browser automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-platform Appium Support:&lt;/strong&gt; Future updates will introduce compatibility with Appium for broader device coverage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NavAIGuide aims to streamline the development process for web navigation assistants, offering a comprehensive suite of tools for developers to leverage LLMs in web automation efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚡️ Quick Install
&lt;/h2&gt;

&lt;p&gt;You can use npm, yarn, or pnpm to install NavAIGuide.&lt;/p&gt;

&lt;h3&gt;
  
  
  npm:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  npm &lt;span class="nb"&gt;install &lt;/span&gt;navaiguide-ts
  // With Playwright:
  npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save-dev&lt;/span&gt; &lt;span class="s2"&gt;"@playwright/test"&lt;/span&gt;
  npx playwright &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Yarn:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  yarn add navaiguide-ts
  // With Playwright:
  yarn add &lt;span class="nt"&gt;--dev&lt;/span&gt; &lt;span class="s2"&gt;"@playwright/test"&lt;/span&gt;
  npx playwright &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  💻 Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js&lt;/li&gt;
&lt;li&gt;Access to OpenAI or AzureAI services&lt;/li&gt;
&lt;li&gt;Playwright for automation capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  OpenAI &amp;amp; AzureAI Key Configuration
&lt;/h3&gt;

&lt;p&gt;Configure the necessary environment variables, for example locally through a &lt;code&gt;.env.local&lt;/code&gt; file (requires &lt;code&gt;dotenv&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;OPENAI_API_KEY&lt;/code&gt;: Your OpenAI API key.&lt;/li&gt;
&lt;li&gt;Azure AI API keys and related configuration. Note that, due to regional availability of different model classes, more than one Azure AI project deployment might be required.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT4TURBOVISION_DEPLOYMENT_NAME&lt;/code&gt;: Deployment of
&lt;a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision"&gt;GPT-4 Turbo with Vision&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT35TURBO_DEPLOYMENT_NAME&lt;/code&gt;: Deployment of &lt;a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/json-mode"&gt;GPT3.5 Turbo with JSON mode&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT35TURBO16K_DEPLOYMENT_NAME&lt;/code&gt;: Deployment of
&lt;a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35"&gt;GPT-3.5 with 16k max request tokens&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT4TURBOVISION_KEY&lt;/code&gt;: GPT-4 Turbo with Vision API Key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT35TURBO_KEY&lt;/code&gt;: GPT3.5 Turbo with JSON mode and GPT-3.5 with 16k API Key&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT35TURBO_INSTANCE_NAME&lt;/code&gt;: GPT-3.5 Turbo with JSON mode and GPT-3.5 with 16k instance name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AZURE_AI_API_GPT4TURBOVISION_INSTANCE_NAME&lt;/code&gt;: GPT-4 Turbo with Vision instance name&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
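
&lt;p&gt;For illustration, a &lt;code&gt;.env.local&lt;/code&gt; following the variables above might look like this (all values are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=sk-...
AZURE_AI_API_GPT4TURBOVISION_DEPLOYMENT_NAME=my-gpt4v-deployment
AZURE_AI_API_GPT35TURBO_DEPLOYMENT_NAME=my-gpt35-turbo-deployment
AZURE_AI_API_GPT35TURBO16K_DEPLOYMENT_NAME=my-gpt35-16k-deployment
AZURE_AI_API_GPT4TURBOVISION_KEY=...
AZURE_AI_API_GPT35TURBO_KEY=...
AZURE_AI_API_GPT4TURBOVISION_INSTANCE_NAME=my-gpt4v-instance
AZURE_AI_API_GPT35TURBO_INSTANCE_NAME=my-gpt35-instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;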

&lt;p&gt;You can also explicitly provide the variables as part of the constructor of the &lt;code&gt;NavAIGuide&lt;/code&gt; class.&lt;/p&gt;

&lt;h3&gt;
  
  
  NavAIGuide Agent
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;NavAIGuideAgent&lt;/code&gt; base class orchestrates the process of performing and reasoning about actions on a web page towards achieving a specified end goal.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example Playwright Agent scenario:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@playwright/test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PlaywrightAgent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;navaiguide-ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;navAIGuideAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PlaywrightAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;playwrightPage&lt;/span&gt;
  &lt;span class="na"&gt;openAIApiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;API_KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// if not provided as process.env&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;findResearchPaperQuery&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Help me view the research paper titled 'Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V' and download its pdf.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;navAIGuideAgent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runAsync&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;findResearchPaperQuery&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See a demo &lt;a href="https://private-user-images.githubusercontent.com/11706033/302044709-7cdd4f9f-7905-4b4a-967b-cabe34502789.gif?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MDcwODgzOTAsIm5iZiI6MTcwNzA4ODA5MCwicGF0aCI6Ii8xMTcwNjAzMy8zMDIwNDQ3MDktN2NkZDRmOWYtNzkwNS00YjRhLTk2N2ItY2FiZTM0NTAyNzg5LmdpZj9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAyMDQlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMjA0VDIzMDgxMFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTJmYzRiYzJkYWI3ZTRjNmVjZDNmNzg0NjI2ZjY5ZTgyZTU2NzdkOWZhMjIyOWRmYzU5NWQ4MmVjM2IwOTc4YzMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.2pXIwJt5VU9uFaVjmsbrOaA98ziPa3pnwPGlGBCt3uY"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>llm</category>
      <category>gpt4v</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
