<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nino Ross Rodriguez</title>
    <description>The latest articles on Forem by Nino Ross Rodriguez (@oninross).</description>
    <link>https://forem.com/oninross</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F70197%2F912ac332-e316-4bbd-a242-bf7029a39089.jpeg</url>
      <title>Forem: Nino Ross Rodriguez</title>
      <link>https://forem.com/oninross</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oninross"/>
    <language>en</language>
    <item>
      <title>Laws of UX in AR</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Tue, 12 Nov 2019 23:15:01 +0000</pubDate>
      <link>https://forem.com/oninross/laws-of-ux-in-ar-34e6</link>
      <guid>https://forem.com/oninross/laws-of-ux-in-ar-34e6</guid>
      <description>&lt;p&gt;[originally posted at &lt;a href="https://www.infiniteimaginations.co/#/article/laws-of-ux-in-ar"&gt;&lt;strong&gt;&lt;em&gt;infinite imaginations&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;As technology improves and becomes more affordable, Augmented Reality (AR) has been steadily gaining popularity. From enterprise products like Google Glass Enterprise Edition 2, Microsoft HoloLens 2 and Magic Leap, to consumer devices like AR-ready mobile phones and browsers, AR has provided users with immersive experiences. But at what cost? Does having an Iron Man-esque interface really provide a better experience? Is AR the solution for everything just because it is trending?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tARKL5wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/pqyvzc7nxoyt1pg3a9x9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tARKL5wO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/pqyvzc7nxoyt1pg3a9x9.gif" alt="Animated gif of Tony Stark and his advanced H.U.D."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The State of AR Today
&lt;/h2&gt;

&lt;p&gt;True to the word 'augmented,' a device projects or embeds a digital object or piece of information on the screen, making it appear as if it were in the real world. AR comes in different forms and sizes, but most implementations have a few things in common:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--toe7csvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hgvd2kfeovlxdpc1ulfd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--toe7csvp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hgvd2kfeovlxdpc1ulfd.jpg" alt="Common denominators of augmented reality"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Devices must be AR-ready in order to 'project' the digital object onto a real environment or object.&lt;/li&gt;
&lt;li&gt; Markers or trigger spots may be needed for the digital items to be embedded into reality.&lt;/li&gt;
&lt;li&gt; Apps may need to be downloaded for the digital items to be embedded into reality without the use of markers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Regardless of which form AR takes, developers have already explored and played with the technology. Many of these experiments are brilliant ideas, executed excellently, but they somehow forget about the user experience side of things. At this point in time, AR is more of a tool, or a gimmick, for most users. You will get users to use AR for five minutes at most; ten might be a stretch, even when they are playing an AR game like Pokemon Go. Players have even disabled the game's AR feature to save battery.&lt;/p&gt;

&lt;h3&gt;
  
  
  London Underground Map Concept
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;How &lt;a href="https://twitter.com/hashtag/AR?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#AR&lt;/a&gt; transforms a simple train ticket into a transportation guide &lt;a href="https://t.co/SEEBKXt4Ig"&gt;pic.twitter.com/SEEBKXt4Ig&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Alexandre Dip Hannemann (@DipAlexandre) &lt;a href="https://twitter.com/DipAlexandre/status/1134031928515092482?ref_src=twsrc%5Etfw"&gt;May 30, 2019&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At first glance, it is a brilliant concept: displaying information about the local subway lines and the stops along different tracks. The developer could even extend it to show where the passenger currently is and how to get to their destination. However, one has to consider the barrier of entry: how would passengers get to use this AR feature? The passenger must first have the app that displays the AR. Another requirement is an internet connection, and most underground subways lack a good one. These factors could hinder passengers from using the AR feature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Philips Hue Concept
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://twitter.com/hashtag/AR?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#AR&lt;/a&gt; s true potential is in blending interactions with physical and virtual things. Here is an &lt;a href="https://twitter.com/hashtag/ARKit?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#ARKit&lt;/a&gt; experiment to control &lt;a href="https://twitter.com/hashtag/PhilipsHue?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#PhilipsHue&lt;/a&gt; made with &lt;a href="https://twitter.com/hashtag/Swift?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#Swift&lt;/a&gt; on &lt;a href="https://twitter.com/hashtag/iOS?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#iOS&lt;/a&gt;. Excited to see what &lt;a href="https://twitter.com/hashtag/WWDC19?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#WWDC19&lt;/a&gt; has to offer in ARKit &lt;a href="https://t.co/4vHkvdqdIG"&gt;pic.twitter.com/4vHkvdqdIG&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Sarang Borude (@doomdave) &lt;a href="https://twitter.com/doomdave/status/1135256401389867009?ref_src=twsrc%5Etfw"&gt;June 2, 2019&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another brilliant concept connects the physical and virtual worlds by creating swatches in the digital world to control and change the colour of smart lights such as a Philips Hue bulb. It is a nice idea to play around with a mobile device and see physical objects change to your liking. However, pointing a mobile device at a smart light just to change the colour and intensity of the light is quite impractical, when the user could change the same settings with a few swipes or taps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Restaurant Menu Concept
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;📱🍷AR Restaurant Menu Concept  &lt;/p&gt;

&lt;p&gt;Finally discover what the dishes really look like and check out their reviews!&lt;br&gt;&lt;br&gt;
I created this with ARKit 2.0.&lt;a href="https://twitter.com/hashtag/ARKit?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#ARKit&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/AugmentedReality?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#AugmentedReality&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/iOS12?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#iOS12&lt;/a&gt; &lt;a href="https://t.co/A1HfTmH1EF"&gt;pic.twitter.com/A1HfTmH1EF&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;— Oscar Falmer #AR (@OsFalmer) &lt;a href="https://twitter.com/OsFalmer/status/1026532909279313921?ref_src=twsrc%5Etfw"&gt;August 6, 2018&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The idea is good and well executed. People do want to know what they are paying for in restaurants, so why not visualise the food and check the reviews before ordering? However, the user still needs to download the app and be connected to the internet to view the virtual menu. This could be a barrier of entry, especially for those who are simply hungry and would like to order food. It is also awkward to hold both the menu and a mobile device just to see the virtual dishes.&lt;/p&gt;

&lt;h2&gt;
  
  
  AR Done Right
&lt;/h2&gt;

&lt;p&gt;Not all AR projects are, or will be, successful despite the technology's popularity. There are a few critical items to consider for an effective AR project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solving a critical issue/problem
&lt;/h3&gt;

&lt;p&gt;Trying to be the Swiss Army knife of AR could be the downfall of a project. An AR project should aim to solve a problem, make a task easier, or accomplish a simple goal for its users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Barrier of entry
&lt;/h3&gt;

&lt;p&gt;Because most AR projects require an app to run properly, think about how to get users to download the app or reach the AR experience. There are many ways to onboard users, but not all of them will be the right solution. The simplest and most common way to lower the barrier of entry is the use of QR codes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ease of use
&lt;/h3&gt;

&lt;p&gt;If a user has passed the initial barrier of entry, the AR project must be easy to use and understand. The project must not only intuitively guide users through the AR, but do so with minimal instructions, since screen real estate is quite limited. Bombarding a mobile screen with icons and instructions on how to use the feature can discourage the user from using the app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Information overload
&lt;/h3&gt;

&lt;p&gt;This is the most common pitfall for any AR project. Designers and developers at times go overboard and dump everything on the screen. Screen real estate is limited, and bombarding users with information can be another hindrance to the project. Show only the elements and information the user actually needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  UX and AR Working Together
&lt;/h2&gt;

&lt;p&gt;The Laws of UX are a collection of principles that designers and developers can refer to when designing and developing user interfaces. I have chosen a few of them that are applicable to AR.&lt;/p&gt;

&lt;h3&gt;
  
  
  Aesthetic Usability Effect
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;'Users often perceive aesthetically pleasing design as design that’s more usable.'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the limited screen real estate on mobile devices, designers have to consider how the AR project will work easily and intuitively for the user. One has to consider the design of the user interface and how its elements can convey instructions to the user without much external intervention. Users tend to assume that a poorly designed interface also means poor usability and poor content. In addition, AR is associated with futuristic technology, so users somewhat expect to be 'awed' by the AR projections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Doherty Threshold
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;'Productivity soars when a computer and its users interact at a pace (&amp;lt;400ms) that ensures that neither has to wait on the other.'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Users today have shorter attention spans and less patience. Now that 5G connections are being rolled out, connection speeds will increase, and users’ expectations will increase with them. Once the project has attracted a user’s attention, it is critical that it delivers information as soon as possible. A possible barrier here is how quickly the user can start interacting with the AR: downloading, installing and opening it as fast as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hick’s Law
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;'The time it takes to make a decision increases with the number and complexity of choices.'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We have all seen futuristic interfaces in science fiction movies like Oblivion, Iron Man and RoboCop. We can admit that it looks impressive and sleek to see so many elements flying around the screen. But take note: your users do not have the processing power to absorb all that information at once. Leave the fancy user interfaces to the special effects teams in movies. Instead, provide a simple, clean interface that the user can easily digest. The fewer choices the user has to make, the easier it will be for them to get on board and start using your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Miller’s Law
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;'The average person can only keep 7 (plus or minus 2) items in their working memory.'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In conjunction with the other Laws of UX, Miller’s Law summarises why an interface and user journey should be kept simple. By freeing users’ minds from complex tasks and journeys, you enable them to achieve more with the project. A very long onboarding process, for example, could discourage any user from continuing. This is especially true for AR, where the time from downloading the app to finally using it is often longer than the time spent actually using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  ARe you ready for the Future?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TGtQ9-j9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mz88hg2okuzz3o42105k.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TGtQ9-j9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mz88hg2okuzz3o42105k.gif" alt="Animated gif of Tony Stark while playing around with holograms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Though the technology has arrived at the comfort of our mobile devices, AR remains a niche market. Not every AR project will satisfy consumers’ wants and needs, and we are still far from achieving AR’s full potential due to technological restrictions. While AR has found a special place in education, retail and training, it is slowly making its way onto the web with the help of developers. Experimental projects have been circulating around the internet to showcase the possibilities of Web AR. Where there is the web, there is the mobile web. And where there is the mobile web, there is the potential to reach millions of users.&lt;/p&gt;

&lt;p&gt;As food for thought in conclusion: there have been some technological advancements (&lt;a href="https://www.infiniteimaginations.co/#/article/vr-and-ar-in-the-mobile-web"&gt;VR and AR in the Mobile Web&lt;/a&gt;) and some technologies are even making a comeback (&lt;a href="https://www.infiniteimaginations.co/#/article/are-qr-codes-making-a-comeback"&gt;Are QR Codes Making A Comeback?&lt;/a&gt;). I believe AR can reach a wider audience by leveraging these advancements, which can help lower the barrier of entry for users. Keeping in mind the Laws of UX, which are not only applicable to visual design, can also help your AR project provide a smoother and more immersive experience.&lt;/p&gt;

</description>
      <category>ar</category>
      <category>augmentedreality</category>
    </item>
    <item>
      <title>Using Artificial Intelligence to Generate Alt Text on Images</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Sun, 03 Feb 2019 19:51:25 +0000</pubDate>
      <link>https://forem.com/oninross/using-artificial-intelligence-to-generate-alt-text-on-images-3n88</link>
      <guid>https://forem.com/oninross/using-artificial-intelligence-to-generate-alt-text-on-images-3n88</guid>
      <description>&lt;p&gt;Web developers and content editors alike often forget or ignore one of the most important parts of making a website accessible and SEO performant: image alt text. You know, that seemingly small image attribute that describes an image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;img src='/cute/sloth/image.jpg' alt='A brown baby sloth staring straight into the camera with a tongue sticking out.' &amp;gt;&lt;/code&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9g7lqi69ab37jnjjtbi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9g7lqi69ab37jnjjtbi.jpeg" alt="A brown baby sloth staring straight into the camera with a tongue sticking out."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📷 Credit: &lt;a href="https://www.huffingtonpost.com/2014/04/17/baby-sloth-compilation_n_5160060.html" rel="noopener noreferrer"&gt;Huffington Post&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you regularly publish content on the web, then you know it can be tedious trying to come up with descriptive text. Sure, 5-10 images is doable. But what if we are talking about hundreds or thousands of images? Do you have the resources for that?&lt;/p&gt;

&lt;p&gt;Let’s look at some possibilities for automatically generating alt text for images with the use of computer vision and image recognition services from the likes of Google, IBM, and Microsoft. They have the resources!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reminder: What is alt text good for?
&lt;/h2&gt;

&lt;p&gt;Often overlooked during web development and content entry, the alt attribute is a small bit of HTML code that describes an image that appears on a page. It’s so inconspicuous that it may not appear to have any impact on the average user, but it has very important uses indeed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Web Accessibility for Screen Readers:&lt;/strong&gt; Imagine a page with lots of images where not a single one contains &lt;code&gt;alt&lt;/code&gt; text. A user browsing with a screen reader would only hear the word “image” blurted out, and that’s not very helpful. Great, there’s an image, but what is it? Including &lt;code&gt;alt&lt;/code&gt; text enables screen readers to help the visually impaired “see” what’s there and better understand the content of the page. They say a picture is worth a thousand words — that’s a thousand words of context a user could be missing.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Display text if an image does not load:&lt;/strong&gt; The World Wide Web seems infallible and, like New York City, it never sleeps. But flaky and faulty connections are a real thing, and when that happens, images tend not to load properly and “break.” Alt text is a safeguard: it displays on the page in place of the “broken” image, providing users with content as a fallback.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;SEO performance:&lt;/strong&gt; Alt text on images contributes to SEO performance as well. Though it doesn’t exactly help a site or page skyrocket to the top of the search results, it is one factor to keep in mind for SEO performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Knowing how important these things are, hopefully you’ll be able to include proper &lt;code&gt;alt&lt;/code&gt; text during development and content entry. But are your archives in good shape? Trying to come up with a detailed description for a large backlog of images can be a daunting task, especially if you’re working on tight deadlines or have to squeeze it in between other projects.&lt;/p&gt;

&lt;p&gt;What if there was a way to apply alt text as an image is uploaded? And! What if there was a way to check the page for missing &lt;code&gt;alt&lt;/code&gt; attributes and automagically fill them in for us?&lt;/p&gt;

&lt;h2&gt;
  
  
  There are available solutions!
&lt;/h2&gt;

&lt;p&gt;Computer vision (or image recognition) has actually been offered for quite some time now. Companies like Google, IBM and Microsoft have their own APIs publicly available so that developers can tap into those capabilities and use them to identify images as well as the content in them.&lt;/p&gt;

&lt;p&gt;There are developers who have already utilized these services and created their own plugins to generate alt text. Take &lt;a href="https://codepen.io/sdras/details/jawPGa" rel="noopener noreferrer"&gt;Sarah Drasner’s generator&lt;/a&gt;, for example, which demonstrates how Azure’s Computer Vision API can be used to create alt text for any image via upload or URL. Pretty awesome!&lt;/p&gt;

&lt;p&gt;&lt;span&gt;See the Pen &lt;a href="https://codepen.io/sdras/pen/jawPGa/" rel="noopener noreferrer"&gt;Dynamically Generated Alt Text with Azure's Computer Vision API&lt;/a&gt; by Sarah Drasner (&lt;a href="https://codepen.io/sdras" rel="noopener noreferrer"&gt;@sdras&lt;/a&gt;) on &lt;a href="https://codepen.io" rel="noopener noreferrer"&gt;CodePen&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;There’s also &lt;a href="https://wordpress.org/plugins/automatic-alternative-text/" rel="noopener noreferrer"&gt;Automatic Alternative Text&lt;/a&gt; by Jacob Peattie, a WordPress plugin that uses the same Computer Vision API. It’s an addition to the workflow that allows the user to upload an image and have &lt;code&gt;alt&lt;/code&gt; text generated automatically.&lt;/p&gt;

&lt;p&gt;Tools like these generally help speed up content management, editing and maintenance. Even the effort of thinking up descriptive text has been minimized and passed to the machine!&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Your Hands Dirty With AI
&lt;/h2&gt;

&lt;p&gt;I have played around with a few AI services and am confident in saying that Microsoft Azure’s Computer Vision produces the best results. The services offered by Google and IBM certainly have their perks and can still identify images with reasonable results, but Microsoft’s is so good and so accurate that it’s not worth settling for anything else, at least in my opinion.&lt;/p&gt;

&lt;p&gt;Creating your own image recognition plugin is pretty straightforward. First, head over to &lt;a href="https://azure.microsoft.com/en-au/services/cognitive-services/computer-vision/" rel="noopener noreferrer"&gt;Microsoft Azure Computer Vision&lt;/a&gt;. You’ll need to log in or create an account in order to grab an API key for the plugin.&lt;/p&gt;

&lt;p&gt;Once you’re on the dashboard, search and select Computer Vision and fill in the necessary details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95m84cediwmpecqly1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl95m84cediwmpecqly1c.png" alt="Screenshot of Microsoft Azure dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the platform to finish spinning up an instance of the Computer Vision service. The API keys for development will be available once it’s done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8da4m8ecu8mdjoungu2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8da4m8ecu8mdjoungu2h.png" alt="Screenshot of API keys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let the interesting and tricky parts begin! I will use vanilla JavaScript for the sake of demonstration. For other languages, check out the &lt;a href="https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fe" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Below is the code, ready to copy and paste; just replace the placeholders with your own values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var request = new XMLHttpRequest();  
 request.open('POST', 'https://[LOCATION]/vision/v1.0/describe?maxCandidates=1&amp;amp;language=en', true);  
 request.setRequestHeader('Content-Type', 'application/json');  
 request.setRequestHeader('Ocp-Apim-Subscription-Key', '[SUBSCRIPTION_KEY]');  
 request.send(JSON.stringify({ 'url': '[IMAGE_URL]' }));  
 request.onload = function () {  
     var resp = request.responseText;  
     if (request.status &amp;gt;= 200 &amp;amp;&amp;amp; request.status &amp;lt; 400) {  
         // Success!  
         console.log('Success!');  
     } else {  
         // We reached our target server, but it returned an error  
         console.error('Error!');  
     }  
     console.log(JSON.parse(resp));  
 };  

 request.onerror = function (e) {  
     console.log(e);  
 };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
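&lt;p&gt;If you prefer the newer &lt;code&gt;fetch&lt;/code&gt; API, the same request can be sketched as below. The helper name is mine, not part of any SDK, and the placeholders are the same ones used above; keeping the options in a plain function also makes them easy to unit test without touching the network.&lt;/p&gt;

```javascript
// Build the options object for the /describe call.
// [LOCATION], [SUBSCRIPTION_KEY] and [IMAGE_URL] are the same
// placeholders as in the snippet above; buildDescribeRequest is
// a hypothetical helper, not part of any SDK.
function buildDescribeRequest(subscriptionKey, imageUrl) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': subscriptionKey
    },
    body: JSON.stringify({ url: imageUrl })
  };
}

// Usage sketch (actual network call):
// fetch('https://[LOCATION]/vision/v1.0/describe?maxCandidates=1',
//       buildDescribeRequest('[SUBSCRIPTION_KEY]', '[IMAGE_URL]'))
//   .then(function (res) { return res.json(); })
//   .then(function (data) { console.log(data); });
```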



&lt;p&gt;Alright, let’s run through some key terminology of the AI service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Location:&lt;/strong&gt; This is the subscription location of the service that was selected prior to getting the subscription keys. If you can’t remember the location for some reason, you can go to the Overview screen and find it under Endpoint. &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o1vl3jo74u4y42frp61.png" alt="Screenshot of Endpoint"&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Subscription Key:&lt;/strong&gt; This is the key that unlocks the service for our plugin use and can be obtained under Keys. There are two of them, but it doesn’t really matter which one is used.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Image URL:&lt;/strong&gt; This is the path for the image that’s getting the alt text. Take note that the images that are sent to the API must meet specific requirements:

&lt;ul&gt;
&lt;li&gt;  Accepted formats: JPEG, PNG, GIF, BMP&lt;/li&gt;
&lt;li&gt;  File size must be less than 4MB&lt;/li&gt;
&lt;li&gt;  Dimensions should be greater than 50px by 50px&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
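&lt;p&gt;Those image requirements can be checked client-side before an API call is wasted. A minimal sketch, assuming you already know the file’s MIME type, size and dimensions (the function name and structure are my own):&lt;/p&gt;

```javascript
// Client-side pre-check against the API limits listed above.
// meetsApiLimits is a hypothetical helper, not part of the Azure API.
const ACCEPTED_TYPES = ['image/jpeg', 'image/png', 'image/gif', 'image/bmp'];
const MAX_BYTES = 4 * 1024 * 1024; // file size must be under 4MB

function meetsApiLimits(mimeType, sizeInBytes, width, height) {
  if (ACCEPTED_TYPES.indexOf(mimeType) === -1) return false;
  if (sizeInBytes >= MAX_BYTES) return false;
  // Both dimensions must exceed 50px.
  return Math.min(width, height) > 50;
}
```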

&lt;h2&gt;
  
  
  Easy Peasy
&lt;/h2&gt;

&lt;p&gt;Thanks to big companies opening their services and API to developers, it’s now relatively easy for anyone to utilize computer vision. As a simple demonstration, I uploaded the image below to Microsoft Azure’s Computer Vision API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid4tqckjzld8ei83eozj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid4tqckjzld8ei83eozj.png" alt="a hand holding a cellphone"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service returned the following details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{  
     'description': {  
         'tags': [  
             'person',  
             'holding',  
             'cellphone',  
             'phone',  
             'hand',  
             'screen',  
             'looking',  
             'camera',  
             'small',  
             'held',  
             'someone',  
             'man',  
             'using',  
             'orange',  
             'display',  
             'blue'  
         ],  
         'captions': [  
             {  
              'text': 'a hand holding a cellphone',  
              'confidence': 0.9583763512737793  
             }  
         ]  
     },  
     'requestId': '31084ce4-94fe-4776-bb31-448d9b83c730',  
     'metadata': {  
         'width': 920,  
         'height': 613,  
         'format': 'Jpeg'  
     }  
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there, you could pick out the &lt;code&gt;alt&lt;/code&gt; text that could be potentially used for an image. How you build upon this capability is your business:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You could create a CMS plugin and add it to the content workflow, where the &lt;code&gt;alt&lt;/code&gt; text is generated when an image is uploaded and saved in the CMS.&lt;/li&gt;
&lt;li&gt;  You could write a JavaScript plugin that adds &lt;code&gt;alt&lt;/code&gt; text on-the-fly, after an image has been loaded with notably missing &lt;code&gt;alt&lt;/code&gt; text.&lt;/li&gt;
&lt;li&gt;  You could author a browser extension that adds &lt;code&gt;alt&lt;/code&gt; text to images on any website when it finds images with it missing.&lt;/li&gt;
&lt;li&gt;  You could write code that scours your existing database or repo of content for any missing &lt;code&gt;alt&lt;/code&gt; text and updates them or opens pull requests for suggested changes.&lt;/li&gt;
&lt;/ul&gt;
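&lt;p&gt;Whichever route you take, the first step is the same: pulling a usable string out of the response shown earlier. A small sketch, with a made-up helper name and an arbitrary confidence threshold, falling back to the returned tags when no caption is confident enough:&lt;/p&gt;

```javascript
// Pick alt text from an Azure /describe response (shape shown above).
// pickAltText is a hypothetical helper; the threshold is an arbitrary choice.
function pickAltText(response, minConfidence) {
  const captions = response.description.captions || [];
  const best = captions.find(function (c) {
    return c.confidence >= minConfidence;
  });
  if (best) return best.text;
  // Fall back to the first few tags when no caption is confident enough.
  const tags = response.description.tags || [];
  return tags.slice(0, 3).join(', ');
}
```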

&lt;p&gt;Take note that these services are not 100% accurate. They sometimes return a low confidence rating or a description that is not at all aligned with the subject matter. But these platforms are constantly learning and improving. After all, Rome wasn’t built in a day.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>computervision</category>
      <category>javascript</category>
    </item>
    <item>
      <title>2018: The Year of Artificial Intelligence</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Tue, 01 Jan 2019 19:47:15 +0000</pubDate>
      <link>https://forem.com/oninross/2018-the-year-of-artificial-intelligence-1onm</link>
      <guid>https://forem.com/oninross/2018-the-year-of-artificial-intelligence-1onm</guid>
      <description>&lt;p&gt;[originally posted at &lt;a href="https://www.infiniteimaginations.co/#/article/2018-the-year-of-artificial-intelligence"&gt;&lt;strong&gt;&lt;em&gt;infinite imaginations&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Another year has passed by quickly — this year however, has been consistent. Artificial intelligence (AI) has been constantly making tech headlines and I have been fortunate enough to be able to play around with some of it. As technology grows and evolves at an exponential rate, what does AI have to offer for us in the future?&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence - AI will continue to improve, for better or worse
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3kCwgk49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/966e0tpcndxmxw7d7d8i.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3kCwgk49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/966e0tpcndxmxw7d7d8i.gif" alt="Neural Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;JARVIS, Skynet and HAL are a few of the well-known self-aware and sentient computer programs in science fiction. While such characters may still be far off, companies like Google and IBM are continuously developing systems that either help us do a particular task faster or simply do it better than us humans, hoping that one day robots or androids will do the dirty work for us. AI has at times alarmed people about losing their jobs. Relax: at this point in time, AI is only capable of excelling at very specific tasks.&lt;/p&gt;

&lt;p&gt;With the aid of machine learning, we will see companies utilise AI to automate tasks such as fast and accurate data processing, autonomous mobile robots, digital assistants and conversational platforms, to name just a few. We first saw AI being implemented in smartphones, such as Apple’s Siri and Google’s Assistant, and in recent months we have seen a sudden surge of smart speakers being released to consumers. AI will continue to spread like wildfire across the industry in the years to come. In 2019, we will definitely see more of AI being integrated into other devices, helping us accomplish more tasks quickly without us humans getting our hands dirty.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conversational Platforms - “Hello, how can I help you today?”
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q9awr0ag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z405tuinxdwv3epf0kg7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q9awr0ag--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z405tuinxdwv3epf0kg7.gif" alt="Google Assistant animation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“Ok Google,” “Hey Siri” and “Alexa” are the wake commands of Google’s, Apple’s and Amazon’s personal assistants. These tech companies have paved the way, developing speech recognition algorithms with artificial intelligence and machine learning. Chatbots hit the industry and initially posed a threat to call centres, taking the load off staff by answering frequently asked questions or helping consumers purchase items through a conversation, without a human behind the wheel.&lt;/p&gt;

&lt;p&gt;During the peak of its hype cycle, I experimented with the technology and &lt;strong&gt;&lt;em&gt;&lt;a href="https://dev.to/#/article/what-i-have-learned-from-building-a-chatbot"&gt;learned a few things about building a chatbot&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;. One of the most important lessons I learned is that one should never build a chatbot just because of the hype, because that dooms the project to failure. In 2019, consumers are more likely to accept these conversational platforms as tech companies continue to develop and improve them. Because it is so easy to create a chatbot, we will see more of them popping up everywhere. We might even be &lt;strong&gt;&lt;em&gt;&lt;a href="http://nat-ai.herokuapp.com/"&gt;conversing with one&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt; without knowing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Being online, all the time - Smartphone usage will steadily increase
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y3q6z-us--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bxb4yypenqz6o9rsx9dv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y3q6z-us--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bxb4yypenqz6o9rsx9dv.gif" alt="Timelapse of people passing by"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We all know that our smartphone is our “digital Swiss army knife” and we could not leave home without it. Browsers have been the platform that gets users onto the internet, and tech companies are continuously improving their products to meet the demands of their loyal consumers. Besides improving battery life, screen size and camera quality, Google and Apple have integrated AI into their software, providing an improved and smoother user experience. Not only do you have a virtual assistant on your phone at all times, the phone also learns and “predicts” which apps you are most likely to use next at that specific time of day.&lt;/p&gt;

&lt;p&gt;There is no stopping browsers from improving in 2019 and the years to come. Browsers are gaining features and capabilities that are becoming more “native” and “app-like.” Machine learning has even landed in browsers, and I have managed to play around with technologies that enabled me to &lt;strong&gt;&lt;em&gt;&lt;a href="https://dev.to/#/article/identifying-objects-using-your-browser-with-tensorflowjs"&gt;detect objects using a camera and a web browser&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;. Developers are even planning to make virtual and augmented reality a browser standard, enabling more users to have an immersive experience without needing to download an app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Super Duper Hyper-personalisation - Better customer experience means more loyal customers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DUPcOxCj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yte1m2u1rusrhyl7ruyw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DUPcOxCj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yte1m2u1rusrhyl7ruyw.gif" alt="Ross doing a weird dance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I open up my browser and start browsing Facebook or Amazon, and at first I don’t notice it, but then I see a familiar pattern: ads or posts related to my past searches. I listen to Spotify, watch videos on YouTube or movies on Netflix, and the music or movies presented match my interests. I didn’t tell anyone, nor did I save any preferences. It just happened. What is this sorcery?&lt;/p&gt;

&lt;p&gt;With the advent of personalisation, it wasn’t enough for people to just read their names in emails from sites and services they had subscribed to; that was already standard in digital marketing. Hyper-personalisation took it a step further by customising the person’s customer experience or journey, offering the things they want or are most interested in. In the years to come, hyper-personalisation with the help of AI will enable companies to serve up the things their consumers want or are interested in as quickly as possible, providing a better customer experience and making customers loyal to the brand. Better customer experience means more loyal customers. More loyal customers means more money for the company. And I’m a sucker for that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Autonomous Mobile Robots - Robots running and opening doors, sh*t is getting real
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eQ7yo2_C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1600/1%2AWu2jGy1K3vgnywqllpNxDw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eQ7yo2_C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/1600/1%2AWu2jGy1K3vgnywqllpNxDw.gif" alt="Boston Dynamics Atlas Robot Does Parkour"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I always thought that drones would only be a rich man’s toy, flying around in open spaces and taking photos or videos that were impossible for anyone else to take. With drones becoming cheaper, they are now being used to transport packages, food and other goods. These unmanned aerial vehicles (UAVs) can help deliver medical supplies to inaccessible places. Backed by AI, these UAVs are no longer guided by humans: they are programmed to know what to do if something goes wrong, where to go, and how to come back to headquarters without human intervention.&lt;/p&gt;

&lt;p&gt;After watching Black Mirror’s episode titled Metalhead and a short video of Boston Dynamics’ robots, the future got a little more exciting (or scarier, depending on how you look at it). With drones now capable of delivering goods, a robot that can do parkour, and a robotic dog that can open doors, it is certainly clear that the dream of having a robot or android as a helper may be realised in the near future. Maybe one day, we might just have our own Threepio helping us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus Round: Immersive Experience (AR &amp;amp; VR) - Merging the digital world with the physical world
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xCUMfeFN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6cw2fxmj7bg36r81jc1g.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xCUMfeFN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6cw2fxmj7bg36r81jc1g.gif" alt="Lego Brickheadz AR demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Augmented reality and virtual reality have been around for some time now. However, they have gained most of their traction in the gaming sector. The most notable AR game is Pokemon Go, where players get to “exercise” by walking around the physical world in search of Pokemon in the digital world. VR games, too, are gaining popularity by immersing players in another world: they can see, hear and interact with the virtual world they are thrown into. Companies such as Zero Latency and venues like VR Zone Shinjuku offer VR experiences to the public. I was lucky enough to experience VR games in Japan, and I would say it is not only fun but a memorable experience for any gamer.&lt;/p&gt;

&lt;p&gt;AR and VR have now expanded their reach beyond the world of gaming. In the next few years, they will reach the training, education and tourism sectors, just to name a few. Some have already embraced the technology and provided immersive experiences to their customers: retail companies have invested in AR booths where customers can virtually try on clothes without even removing their own. With tech companies like Google and Mozilla working to make AR and VR a standard in browsers, I believe that one day we will see more &lt;strong&gt;&lt;em&gt;&lt;a href="/#/article/vr-and-ar-in-the-mobile-web"&gt;immersive experiences in the mobile web&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping it up: The future is bright
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UHuNUqy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/881p91c2hgd8cruc2bel.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UHuNUqy1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/881p91c2hgd8cruc2bel.gif" alt="Maverick putting on shades"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2019 will be an exciting year as AI, AR and VR continue to gain traction and technology gets better and cheaper. I see the next year bringing the integration of these technological trends into more smart devices and wearables. We could potentially see the rise (or the comeback) of smart glasses. Google Glass wasn’t ready for the consumer market years ago, probably because of the price tag, battery life or privacy concerns, but changing the approach and purpose of smart glasses might change the perception of the device.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vr</category>
      <category>ar</category>
      <category>future</category>
    </item>
    <item>
      <title>VR and AR in the Mobile Web</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Mon, 26 Nov 2018 20:03:32 +0000</pubDate>
      <link>https://forem.com/oninross/vr-and-ar-in-the-mobile-web-2iab</link>
      <guid>https://forem.com/oninross/vr-and-ar-in-the-mobile-web-2iab</guid>
      <description>&lt;p&gt;We have seen people peer into headsets to be immersed in a virtual world, and seen people point their devices in all directions looking for something interesting around them.  VR and AR apps are already available in app stores, but having to download an app can be a hindrance to getting audiences on board.  What if it was easily available in the mobile web?&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Reality
&lt;/h2&gt;

&lt;p&gt;VR is an interactive computer simulation that takes the user into a virtual world.  The computer generates a simulation and tricks the mind into believing that whatever the user sees is real.  It is usually associated with weird-looking goggles trailing long wires, used mostly for entertainment.  Early VR was clunky, rendered low-definition objects, and users often just didn’t know what to do in the new virtual world.  It was expensive to create, maintain and distribute to the consumer market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5ndjwzlzxysyfrfu2m4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5ndjwzlzxysyfrfu2m4w.png" alt="The Sensorama machine"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The Sensorama machine (scriptanime.wordpress.com)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With VR technology advancing at an exponential rate, goggles and headsets are getting smaller and lighter, and people can now wear them with ease. The quality has also greatly improved, projecting more realistic and smoother experiences.  Prices have also come down for consumers: companies have created their own VR equipment that consumers can buy without hurting their wallets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Augmented Reality
&lt;/h2&gt;

&lt;p&gt;AR is somewhat similar to VR.  Instead of the computer generating the entire environment as a simulation, it creates virtual elements and embeds them in the real world.  Just like VR, early AR required weird-looking goggles and wires, still connected to a computer.  A common use of AR, for example, is in heads-up displays (HUDs), where data or information is shown on transparent displays without making users look away from their usual viewpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw5co9monjbz9qjikoiqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fw5co9monjbz9qjikoiqs.png" alt="Virtual Fixtures – first A.R. system"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Virtual Fixtures – first A.R. system (1992, U.S. Air Force, WPAFB)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AR once had a niche market but has now expanded into broader industries such as gaming, education and business.  AR has become more popular with the success of Pokemon Go.  As technology gets better and better, tech companies like Google and Apple have invested in incorporating AR into their devices for developers to explore the different possibilities. &lt;/p&gt;

&lt;h2&gt;
  
  
  Technological Advancement
&lt;/h2&gt;

&lt;p&gt;Though it still requires a computer and other tracking devices, consumers are now able to purchase their own gear to jump into the virtual world.  Developers, too, have been given the opportunity to create their own content and distribute it easily in the market.  The most notable names in the market are Oculus and HTC Vive.  Sony soon joined the bandwagon, creating its own VR peripherals, which were cheaper than the PC versions.  &lt;/p&gt;

&lt;p&gt;With these technological advancements, it is not hard to imagine that one day VR and AR will be fully supported on our smart devices.  Tech giants have already started integrating the technology into smartphones, and companies have taken advantage of this by creating rich experiences for their users.  While the developers at Mozilla and Google propose the standardisation of the WebXR API, web developers have started to create plugins to make VR and AR work on the web.&lt;/p&gt;

&lt;h2&gt;
  
  
  VR and AR in the Web
&lt;/h2&gt;

&lt;p&gt;While developers wait for the WebXR API to be standardised and released to the public, there are plenty of alternatives they can use to showcase VR and AR on the web.  The most popular is Three.js by Ricardo Cabello (Mr.doob), but it can be daunting for developers who have little or no experience with WebGL.  An easier alternative is A-Frame, a framework specifically designed to create rich VR experiences without having to know WebGL.  &lt;/p&gt;

&lt;p&gt;See the Pen &lt;a href="https://codepen.io/oninross/pen/Mvmdzg/" rel="noopener noreferrer"&gt;Hello World — A-Frame&lt;/a&gt; by Nino Ross Rodriguez (&lt;a href="https://codepen.io/oninross" rel="noopener noreferrer"&gt;@oninross&lt;/a&gt;) on &lt;a href="https://codepen.io" rel="noopener noreferrer"&gt;CodePen&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A simple VR world like the example above was created using only 17 lines of HTML.  It works well on both desktop and mobile devices, and users can dive in immediately without downloading or installing any additional software or hardware.  An icon on the screen lets users switch to full screen on desktop or stereoscopic VR on mobile.  It gets more interesting on mobile devices, because the device acts as the camera in the VR world: wherever the user points it is what they see in VR.  &lt;/p&gt;
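
&lt;p&gt;For readers who can’t load the CodePen embed above, an A-Frame scene of that flavour looks roughly like this (a minimal sketch based on A-Frame’s documented “hello world”; the script URL and version number are illustrative):&lt;/p&gt;

```html
&lt;html&gt;
  &lt;head&gt;
    &lt;!-- A-Frame library; version shown is illustrative --&gt;
    &lt;script src="https://aframe.io/releases/0.8.2/aframe.min.js"&gt;&lt;/script&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;a-scene&gt;
      &lt;!-- Each a-* tag is a declarative 3D entity placed in the scene --&gt;
      &lt;a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"&gt;&lt;/a-box&gt;
      &lt;a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"&gt;&lt;/a-sphere&gt;
      &lt;a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"&gt;&lt;/a-cylinder&gt;
      &lt;a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"&gt;&lt;/a-plane&gt;
      &lt;a-sky color="#ECECEC"&gt;&lt;/a-sky&gt;
    &lt;/a-scene&gt;
  &lt;/body&gt;
&lt;/html&gt;
```

&lt;p&gt;The &lt;code&gt;a-scene&lt;/code&gt; element handles the WebGL setup, the camera and the VR-mode button for you.&lt;/p&gt;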

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsk4znfer1ubjx2r1frox.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fsk4znfer1ubjx2r1frox.jpeg" alt="AR.js demo displaying AR objects"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AR.js demo displaying AR objects using Hiro markers on mobile phones&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AR.js was created by Jerome Etienne, with a focus on making AR for the web a reality.  Though it still relies on markers to display elements on the screen, it is a relatively big step toward bringing AR to the web.  The sample above was created using only 30 lines of HTML (a little larger, to show two markers displaying two different objects).  Just like its A-Frame sibling, it works well on both desktop and mobile devices.  &lt;/p&gt;
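
&lt;p&gt;A marker-based AR.js scene follows the same declarative pattern as A-Frame (a sketch assuming the A-Frame build of AR.js; the script URLs and versions are illustrative):&lt;/p&gt;

```html
&lt;!-- A-Frame plus the AR.js A-Frame build; URLs/versions are illustrative --&gt;
&lt;script src="https://aframe.io/releases/0.8.2/aframe.min.js"&gt;&lt;/script&gt;
&lt;script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"&gt;&lt;/script&gt;
&lt;body style="margin: 0; overflow: hidden;"&gt;
  &lt;a-scene embedded arjs&gt;
    &lt;!-- The built-in "hiro" preset marker anchors the box in the camera view --&gt;
    &lt;a-marker preset="hiro"&gt;
      &lt;a-box position="0 0.5 0" material="color: yellow;"&gt;&lt;/a-box&gt;
    &lt;/a-marker&gt;
    &lt;a-entity camera&gt;&lt;/a-entity&gt;
  &lt;/a-scene&gt;
&lt;/body&gt;
```

&lt;p&gt;Printing or displaying the Hiro marker and pointing the device camera at it makes the box appear on top of the marker.&lt;/p&gt;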

&lt;h2&gt;
  
  
  The Future of VR and AR
&lt;/h2&gt;

&lt;p&gt;Google showcased WebARonARCore at its I/O event this year.  I managed to play around with the experimental technology in Chrome Canary, and I must say it looks really promising, paving a brighter future for VR and AR on the web.  Markerless AR on the web would make it much easier for users to get on board and immersed without needing to download an app.  The floodgates will surely be wide open once this feature has been enabled by default and standardised.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Zu6MXyfi-Ts" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fimg.youtube.com%2Fvi%2FZu6MXyfi-Ts%2F0.jpg" alt="Google Demo of WebXR Device API"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Google Demo of WebXR Device API&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Imagine a collaborative virtual world where you only need a smartphone and you don't need to download an app, where you don’t need to buy expensive equipment to experience VR, where you can visualise the information you need anywhere, anytime.  It is truly an exciting time for the mobile web.&lt;/p&gt;

</description>
      <category>virtualreality</category>
      <category>augmentedreality</category>
      <category>webdev</category>
      <category>mobilewebdevelopment</category>
    </item>
    <item>
      <title>Are QR Codes Making A Comeback?</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Mon, 30 Jul 2018 11:21:56 +0000</pubDate>
      <link>https://forem.com/oninross/are-qr-codes-making-a-comeback-3e4k</link>
      <guid>https://forem.com/oninross/are-qr-codes-making-a-comeback-3e4k</guid>
      <description>&lt;p&gt;At some point in time, we have all seen QR codes plastered over products, boxes and posters.  They were meant to serve as a bridge from the physical world to the internet, taking users to a webpage or a YouTube video, or even making a smart device perform a specific task.  The imagined user experience was to help users get to information quickly. However, that was not the case.  Though QR codes were thought to be extinct, these square dots refuse to die and are rising once again in different forms and use cases.&lt;/p&gt;

&lt;h1&gt;
  
  
  What ever happened to them?
&lt;/h1&gt;

&lt;p&gt;The QR code system was invented to track vehicles during manufacturing, allowing high-speed component scanning.  Taking that simple idea and applying it to the web industry, it was supposed to provide users with quick and specific information.  If a QR code was on a poster about a concert, users could simply scan it and be taken to a webpage with more information about the concert.  That is just one of the possible applications.  &lt;/p&gt;

&lt;p&gt;Though the abbreviation means “Quick Response,” there were pitfalls in the user experience of the technology.  At the time of the QR code boom, responsive web design was not yet common, and viewing a website on a smartphone literally meant viewing the desktop version on a tiny screen.  It was a terrible experience, and it might be one of the main reasons users chose to abandon and even ignore QR codes when they were in sight.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm8vcksdya45o063rkx6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm8vcksdya45o063rkx6d.png" title="R.I.P. QR Code" alt="alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We barely see QR codes anymore.  Even when we do, we can’t be bothered to pick up our smartphones to scan them.  However, the QR code has risen from the ashes: it is being used differently, and in some cases has evolved into something else.  &lt;/p&gt;

&lt;h1&gt;
  
  
  “I shall return…” - QR Code
&lt;/h1&gt;

&lt;p&gt;Typing on a small touch screen has always been a problem for most users.  Tech companies have continually improved keyboard typing on mobile devices, from auto-complete to swipe-typing.  That partially solves typing for the sentences we normally write in our day-to-day lives.  But there are cases where a user still needs to type something out of the ordinary, like a very long URL mixed with numbers and characters (need I mention both lower and upper case).  &lt;/p&gt;

&lt;p&gt;Cue in the QR Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  WeChat
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feoaaj5puap70t50cjkdt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Feoaaj5puap70t50cjkdt.png" title="WeChat Pay" alt="WeChat Pay"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;WeChat is probably the mobile app that has most successfully utilised the QR code, and there is no sign of it stopping.  It’s the one-stop app for the Chinese audience, where users can pay bills, order goods and services, transfer money to other users, and pay in stores.  Stores display a QR code at the counter; the user scans it, keys in how much to pay, and that’s it.  It has simplified the user journey for customers and made the experience a seamless one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Twitter / LinkedIn
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5hv2lx0ujxsde6tcz0da.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5hv2lx0ujxsde6tcz0da.png" title="LinkedIn" alt="LinkedIn"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sharing contacts on a mobile phone can be tricky, especially if you heard a name differently or got a number wrong.  Twitter and LinkedIn have made it easy to share contacts with each other: both apps have their own QR code generator and reader, so users can generate and scan these codes in two taps.  This way, no one needs to jot down the details or fumble with their smartphone anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Snapcodes / Spotify Codes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3nm56190qy7hmp7i1ged.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3nm56190qy7hmp7i1ged.png" title="Spotify Codes" alt="Spotify Codes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the factors that confuses users about QR codes is that, most of the time, they don’t know what to do with them.  The blunt irony was that the QR code reader app was never installed on your device when you needed to scan, or there were no QR codes in sight once you finally installed it.  Snapchat and Spotify have taken QR code generation to a different level, one that reflects more of their brand.  With Snapcodes carrying the “ghostly” logo and Spotify Codes built from soundwaves, users immediately know which app they should use to scan these codes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Hidden little gems
&lt;/h1&gt;

&lt;p&gt;With apps like Snapchat, WeChat and LinkedIn paving the way, I believe that QR codes are making a subtle comeback in the industry.  The term “quick response” is now living up to its name, as both tech giants have finally included the feature in their respective devices.  Apple made it seamless: just point the camera at a QR code and a preview link tells the user what it has decoded.  Google also included it in its software; though it requires a few more steps than Apple’s, it still gets the job done. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fqbf8zj19fcmiqqb7odkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fqbf8zj19fcmiqqb7odkn.png" title="Scanning QR Codes" alt="Scanning QR Codes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These little gems have helped improve the user experience for everyone.  When an Apple or Android camera scans a QR code, it shows a preview of the decoded link, giving users a sense of security that what they are about to open is safe.  Best of all, it’s no longer a third-party app: you simply take out your phone and scan the QR code to get to the endpoint quickly. &lt;/p&gt;

&lt;p&gt;The humble QR code has come a long way, from a disjointed user experience to being quietly reintroduced into the industry.  These black-and-white squares, in one form or another, aim to help us get from the physical world to the internet.  Look out for them; they may be your window to a virtual world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fkdc1x5leo9cz23byk34u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fkdc1x5leo9cz23byk34u.png" title="infinite imaginations" alt="infinite imaginations"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ux</category>
      <category>qrcodes</category>
      <category>tech</category>
      <category>trends</category>
    </item>
    <item>
      <title>Identifying Objects Using Your Browser using TensorFlowJS</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Tue, 12 Jun 2018 08:35:59 +0000</pubDate>
      <link>https://forem.com/oninross/identifying-objects-using-yourbrowser-1i3d</link>
      <guid>https://forem.com/oninross/identifying-objects-using-yourbrowser-1i3d</guid>
      <description>

&lt;p&gt;You might be familiar with the TV show &lt;a href="https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)"&gt;Silicon Valley&lt;/a&gt; and the &lt;a href="https://www.youtube.com/watch?v=ACmydtFDTGs"&gt;“Hot Dog” episode&lt;/a&gt;, where the cast created an app to simply (and yet hilariously) determine whether an object is a hot dog or not a hot dog. And it’s not science fiction anymore, with applications like Google Lens rolled out to most modern smartphones. These days anyone can simply point their camera and get the information they need, quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CCXGy8i5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/g474ilocbdst7mofjyv0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CCXGy8i5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/g474ilocbdst7mofjyv0.gif" alt="Silicon Valley: Hot Dog episode"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are services like &lt;a href="https://cloud.google.com/vision/"&gt;Google Cloud Vision API&lt;/a&gt;, &lt;a href="https://aws.amazon.com/rekognition/"&gt;AWS Rekognition&lt;/a&gt; and &lt;a href="https://clarifai.com/"&gt;Clarifai&lt;/a&gt; - to name a few - available in the market for anyone to implement and use. Though these services let you do more with less code, they come with a pay-as-you-go price tag. They are also generic image identifiers and may not fit your specific use case.&lt;/p&gt;

&lt;p&gt;Enter: &lt;strong&gt;&lt;em&gt;TensorFlowJS&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zh5I0Pqp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/eq6fpdrg0jqthcufidur.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zh5I0Pqp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/eq6fpdrg0jqthcufidur.jpg" alt="TensorFlow JS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's a JavaScript library released by the Google Brain Team that brings machine learning to everyone. TensorFlow was originally written in Python, C++ and CUDA; thanks to the team, it has been ported to JavaScript, where it is commonly used in the browser. Though TensorFlowJS is not exactly the same as its bigger Python sibling, the library is already equipped with the necessary APIs to build and train models from scratch, run existing TensorFlow models and retrain pre-existing models, all from the convenience of your browser.&lt;/p&gt;
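&lt;p&gt;As a taste of what in-browser classification looks like, here is a hedged sketch using the pre-trained MobileNet wrapper from the &lt;code&gt;@tensorflow-models/mobilenet&lt;/code&gt; package, where &lt;code&gt;model.classify()&lt;/code&gt; resolves to an array of &lt;code&gt;{ className, probability }&lt;/code&gt; objects per the package’s README. The helper names are mine.&lt;/p&gt;

```javascript
// Plain helper: turn the classifier's predictions into a friendly reply.
function formatPrediction(predictions) {
  if (!predictions.length) return 'I have no idea what that is.';
  const top = predictions[0]; // predictions arrive sorted by probability
  return `${top.className} (${Math.round(top.probability * 100)}% sure)`;
}

// Sketch: classify a DOM <img> element with a pre-trained MobileNet,
// entirely in the browser (weights are downloaded on the first call).
async function classifyImage(imgElement) {
  const mobilenet = await import('@tensorflow-models/mobilenet');
  const model = await mobilenet.load();
  const predictions = await model.classify(imgElement); // [{ className, probability }, ...]
  return formatPrediction(predictions);
}
```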

&lt;p&gt;TensorFlow had been circulating around my reading feeds, and Google’s recent IO event inspired and pushed me to get my hands dirty with machine learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Road To Discovery
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o8nJtyj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3ycvbc8x6j8w9c0h237l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o8nJtyj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3ycvbc8x6j8w9c0h237l.jpg" alt="Road to discovery"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Warnings from different sources suggested that TensorFlow wouldn’t be helpful if you didn’t have any machine learning background. Python kept popping up as the language of choice for machine learning development, and it seemed that I needed to learn the basics in order to proceed. This was one of many hurdles that I encountered, but I was still determined to create my own image identifier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Warning: If you don’t have any Machine Learning background, TensorFlow is not for you
&lt;/h3&gt;

&lt;p&gt;The first step is always going back to basics and reading the documentation on &lt;a href="https://js.tensorflow.org/"&gt;TensorFlowJS’s website&lt;/a&gt;. It seemed pretty straightforward at first, but I was wrong. More questions surfaced and I began to believe the earlier warning signs. Maybe I did need to learn about machine learning before diving into TensorFlowJS. Even scouring YouTube for tutorials and references didn’t help much. I did manage to “create” an image classifier locally on my machine, but it was running in Python. I needed it to be client-side, just like the &lt;a href="https://emojiscavengerhunt.withgoogle.com/"&gt;Emoji Scavenger Hunt&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Having found the &lt;a href="https://github.com/google/emoji-scavenger-hunt"&gt;repository&lt;/a&gt; for the Emoji Scavenger Hunt, and after hours of reverse engineering the code to suit my needs, I was finally able to create my own image classifier that works smoothly on the client side.&lt;/p&gt;

&lt;h3&gt;
  
  
  Teach it as if it were a 2-year-old
&lt;/h3&gt;

&lt;p&gt;I thought that my biggest hurdle would be developing in Python. I initially developed in a Windows environment, which was confusing and a pain to set up, but the moment I switched to a Mac, everything was smooth sailing. The biggest lesson I learned was the importance of providing the system with good data. My colleagues told me that in order to get highly accurate results, you must provide good initial data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WMq48E7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gtg68zr81gchr1912dvw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WMq48E7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gtg68zr81gchr1912dvw.jpg" alt="Teach it as if it was like a 2-year-old"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simple analogy for how machine learning works is teaching a 2-year-old by showing them images, where the data is the set of images and the 2-year-old is the machine learning system. For example, if you want the child to learn what an apple is, you would show the child different pictures of apples only. Nothing else should be in the picture: no other fruits, no other elements. After seeing a certain number of pictures, the child will be able to recognise an apple in real life. On the other hand, if you give the child pictures with apples and oranges, apples and bananas, or apples and grapes, the child will get confused when they see those fruits together.&lt;/p&gt;

&lt;p&gt;The moral of the analogy is that the images initially fed to the system should be easy to comprehend for someone (or something) that doesn’t know what the subject is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Riddle Me This PWA
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KDdo9NZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mzs2802qaetkpnsmtbkk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KDdo9NZQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/mzs2802qaetkpnsmtbkk.jpg" alt="Riddle me this"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Riddle me this riddle me that&lt;br&gt;Random riddles taken out from the hat&lt;br&gt;When you know the answer take a snap&lt;br&gt;If you are correct I will give you a clap&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The goal was to come up with my own image identifier and put it to some good use. Riddle Me This is a PWA that shows you random riddles about common items found around your home. Your challenge is to find the answer and take a picture of it. If you are correct, you proceed to the next riddle. If you are wrong, well... keep guessing.&lt;/p&gt;
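&lt;p&gt;The heart of the game can be sketched as a small check that compares the riddle’s answer against the labels the classifier returns for the player’s snap. The function name and the confidence threshold below are illustrative, not the app’s actual values.&lt;/p&gt;

```javascript
// Hypothetical sketch of the game's core check: the guess is correct when a
// sufficiently confident label from the classifier contains the riddle's answer.
function isCorrectGuess(riddleAnswer, predictions, threshold = 0.6) {
  return predictions.some(p =>
    p.probability >= threshold &&
    p.className.toLowerCase().includes(riddleAnswer.toLowerCase())
  );
}
```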

&lt;p&gt;Have a go at the link below! Happy hunting!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://goo.gl/oaVLDu"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q_-lLVLU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ihzedef1w5h7cxxangnc.png" alt="Nat's Eye"&gt; https://goo.gl/oaVLDu&lt;/a&gt;&lt;/p&gt;


</description>
      <category>machinelearning</category>
      <category>tensorflowjs</category>
      <category>pwa</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Through The Looking Glass: An Overview of Visual Recognition</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Mon, 14 May 2018 07:58:05 +0000</pubDate>
      <link>https://forem.com/oninross/through-the-looking-glass-an-overview-of-visual-recognition-5e0l</link>
      <guid>https://forem.com/oninross/through-the-looking-glass-an-overview-of-visual-recognition-5e0l</guid>
      <description>&lt;h1&gt;
  
  
  Through The Looking Glass: An Overview of Visual Recognition
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fmission-impossible-face-finding.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fmission-impossible-face-finding.jpg" alt="Face finding from the movie Mission Impossible: Ghost Protocol"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s like something straight out of a sci-fi movie: machines, robots and androids able to identify objects and faces with ease. But early adaptations of visual recognition (or image recognition) technology are available today through services provided by the likes of Google and IBM.&lt;/p&gt;

&lt;p&gt;Let’s take a journey of how computers and devices took the first steps looking into our world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Images&lt;a href="https://en.wikipedia.org/wiki/Google_Images#Beginnings_and_expansion_.282001.E2.80.932011.29" rel="noopener noreferrer"&gt;[1]&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Do you remember the days when Google was just a search engine and all you could search for was simple text? Google developers then expanded their search product and gave users results with images. The search engine indexed millions of images and the Image Search was born.&lt;/p&gt;

&lt;p&gt;A few years later Google introduced the Search by Image feature, which allowed users to reverse image search directly into Google Search without any third-party add-ons.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fgoogle-images.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fgoogle-images.jpg" alt="Google Images"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  “There’s An App For That™”&lt;a href="http://edition.cnn.com/2010/TECH/mobile/10/12/app.for.that/index.html" rel="noopener noreferrer"&gt;[2]&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Technologies were still young and limited back then. IBM Watson was still Deep Blue playing chess. The cloud was still a dodgy place to keep your files. You pretty much relied on Google to search for anything. Image recognition apps or programs were either written by companies with their own algorithms, which were too expensive to produce, or piggybacked on Google’s Image Search, the easier and cheaper solution.&lt;/p&gt;

&lt;p&gt;Likely developed at the same time as Google Images, Google went on to release an app version of its image search feature. It was called Google Goggles[3] and it allowed users to search by taking a picture. The app could also recognise labels and landmarks without a text-based search.&lt;/p&gt;

&lt;p&gt;This meant people could search for virtually anything, immediately, using their smartphones, without the arduous wait of getting back to a desktop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fgoogle-goggles.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fgoogle-goggles.jpg" alt="Google Goggles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Visual Recognition and Web Apps
&lt;/h2&gt;

&lt;p&gt;Let’s fast forward to today, where technology has had a sudden growth spurt and given us, users and developers alike, endless possibilities to play with.&lt;/p&gt;

&lt;p&gt;We now have the ability to tinker with artificial intelligence, machine learning, deep learning, natural language processing and visual recognition, to name just a few. Of course, we could always hire developers who specialise in artificial intelligence and machine learning, but it would cost us an arm and a leg. Instead, there are services from IBM Watson&lt;a href="https://www.ibm.com/cloud/watson-visual-recognition" rel="noopener noreferrer"&gt;[4]&lt;/a&gt; and Google Cloud Platform&lt;a href="https://cloud.google.com/vision/" rel="noopener noreferrer"&gt;[5]&lt;/a&gt; that offer developers ease of use, all with an affordable price tag.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Than Meets The Eye
&lt;/h2&gt;

&lt;p&gt;With the possibilities being endless, here are a few examples of how we can use these technologies at our disposal:&lt;/p&gt;

&lt;h3&gt;
  
  
  “Eye See It” App
&lt;/h3&gt;

&lt;p&gt;There may be times when our visually impaired friends need assistance in identifying an object or reading small text. With an app installed on their smartphone, it would be as easy as taking a picture, with the app replying by voice to describe what it has seen. A user could take a picture of an object unknown to them and the app could reply, “I am most definitely looking at a sports car”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fcentenario.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fcentenario.jpg" alt="Centenario Lamborghini"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  “Plant-Eye-tion” App
&lt;/h3&gt;

&lt;p&gt;A possible idea for the plant lovers out there who may not be as well versed as our gardener friends, who are walking encyclopedias of the plants they take care of. What if an app could give you the name of a flower or plant just by taking a picture of it? Not only that, it could give you the full details of the flower or plant and help you keep it alive by telling you how many times a day it needs to be watered and whether it prefers sun or shade. It could turn a hobbyist into an expert by sharing information relevant to their surroundings.&lt;/p&gt;

&lt;h3&gt;
  
  
  “Eye Keeper”
&lt;/h3&gt;

&lt;p&gt;Another possibility is curating relevant, safe content for a social media wall in real time. Think about it: you’re at an event with a big digital wall that aggregates the pictures people take and upload to social media or to the servers. Currently, you have two choices: dedicate team members to monitoring and curating the content, or give up full control and allow every post with the relevant tagging to appear on the wall. One costs a lot of time and money; the other is a huge risk.&lt;/p&gt;

&lt;p&gt;The visual recognition service can act as a gatekeeper, analysing images before they actually reach the live screen and automatically preventing any inappropriate images from being displayed on the big screen.&lt;/p&gt;
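&lt;p&gt;As a sketch of that gatekeeper, the decision itself can be a tiny function over SafeSearch-style likelihood labels. The likelihood scale below follows Cloud Vision’s enum; the field names and the cut-off at &lt;code&gt;LIKELY&lt;/code&gt; are my own choices.&lt;/p&gt;

```javascript
// Likelihood scale as used by SafeSearch-style annotations, least to most likely.
const LIKELIHOOD = ['VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE', 'LIKELY', 'VERY_LIKELY'];

// Pass a photo to the wall only when every risky category stays below 'LIKELY'.
function isSafeForWall(annotation) {
  const risky = ['adult', 'violence', 'racy'];
  return risky.every(key =>
    LIKELIHOOD.indexOf(annotation[key]) < LIKELIHOOD.indexOf('LIKELY')
  );
}
```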

&lt;h2&gt;
  
  
  I Can Show You The World...
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence has taken its first few steps to further see into our world. In the near future, we might not need machine learning and deep learning anymore to “see” and “learn.” Google Lens&lt;a href="https://en.wikipedia.org/wiki/Google_Lens#App" rel="noopener noreferrer"&gt;[6]&lt;/a&gt; already took the first step in maximising the potential of the combination of visual recognition, deep learning and augmented reality.&lt;/p&gt;

&lt;p&gt;I too, took a step in trying to see what this technology can do. Have a play at our little demo below. I highly suggest that you use your mobile phone for scanning images. Bear in mind though, that this is only a proof of concept. You might still get weird replies from the web app. Happy clicking!&lt;/p&gt;

&lt;p&gt;&lt;a href="http://adel.ph/eye" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthrough-the-looking-glass%2Fqrcode.png" alt="Nat's Eye"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>imagerecognition</category>
      <category>visualrecognition</category>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>What I Have Learned From Building A Chatbot</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Sun, 06 May 2018 21:37:29 +0000</pubDate>
      <link>https://forem.com/oninross/what-i-have-learned-from-building-a-chatbot-2hl5</link>
      <guid>https://forem.com/oninross/what-i-have-learned-from-building-a-chatbot-2hl5</guid>
      <description>&lt;h1&gt;
  
  
  What I Have Learned From Building A Chatbot
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What’s with the hype?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fconversational-ui.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fconversational-ui.gif" alt="Chat bubbles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technology nerds and enthusiasts have always dreamed of having a conversation with an artificial intelligence or AI. The living embodiment of the perfect AI would be JARVIS from the Iron Man movies. No keyboards, no mouse, no stylus. Just your voice, to have a conversation with your virtual personal assistant to do work for you.&lt;/p&gt;

&lt;p&gt;But that is science fiction. AI is still in its infancy, and it has a long way to go before it matures enough to pass the Turing test.&lt;/p&gt;

&lt;p&gt;When Siri came out on the iPhone, it was the first digital personal assistant made for consumers. I was amazed at how it instantly recognised voice requests and came back with a reply. A few years later, Google Assistant came out not only on smart devices but on smart speakers as well; it was the first virtual personal assistant to hold a two-way conversation. Then there was a sudden boom of chatbots.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a chatbot?
&lt;/h2&gt;

&lt;p&gt;A chatbot, or conversational UI, is any interface that imitates chatting with a real human. It can be as simple as a chat window on a website or as complex as interacting with an AI on a smart device. Whatever the medium may be, if there is a two-way conversation, you are interacting with a chatbot.&lt;/p&gt;

&lt;p&gt;There are a few types of conversational UIs in the industry: flow type, AI type and hybrid type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fflow-type.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fflow-type.jpg" alt="Flow type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flow type&lt;/strong&gt; is a tree-based kind of interaction, where the user is presented with choices and driven through a specific path. This path is pre-defined by the developer, and the user can only “go” where the interface tells them to go, much like the Choose Your Own Adventure books.&lt;/p&gt;
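&lt;p&gt;A flow-type bot can be sketched as a tree of nodes, each with a prompt and the choices that lead to the next node. The nodes and wording here are invented for illustration.&lt;/p&gt;

```javascript
// Minimal flow-type bot: a pre-defined tree the user walks through by choice.
const flow = {
  start: { prompt: 'Want a tour of my portfolio? (yes/no)', next: { yes: 'tour', no: 'bye' } },
  tour:  { prompt: 'Great! Here is my latest work.',        next: {} },
  bye:   { prompt: 'No worries, see you around!',           next: {} }
};

// Advance to the next node for a recognised choice; unknown input keeps the
// user at the same node, so the path stays on the pre-defined rails.
function step(nodeId, userInput) {
  const choice = userInput.trim().toLowerCase();
  return flow[nodeId].next[choice] || nodeId;
}
```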

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fai-type.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fai-type.gif" alt="AI type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI type&lt;/strong&gt; relies on artificial intelligence, letting the user freely engage in a real conversation. With the likes of Google Assistant, Siri and Cortana, there is an AI driving the entire conversation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fhybrid-type.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fhybrid-type.jpg" alt="Hybrid type"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid type&lt;/strong&gt; is the most common type of conversational UI, and this is where chatbots come in. It’s a combination of the flow and AI types, where users are driven through a specific path while still being able to engage with the chatbot in conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do you want to build a chatbot?
&lt;/h2&gt;

&lt;p&gt;As a developer, you would think it would be as easy as grabbing code from the internet and deploying it to a server. But there is more to it than lines of code and pixels of art.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 1: “Don’t build one because of the hype”
&lt;/h3&gt;

&lt;p&gt;It was my first mistake, and I believe avoiding it is the golden rule of creating chatbots. Just because something is trending doesn’t mean one should jump on the bandwagon and hope it does its job. What I did was basically take Nathan Mk I off the shelf and get my friends to test it. Because it didn’t have a single clear purpose, the users who tested it assumed they were “talking” to JARVIS.&lt;/p&gt;

&lt;p&gt;In short, the chatbot should have a purpose. Introduce the chatbot to your audience and tell them what its purpose is and what it can do. This way, you set your audience’s expectations to the level of your chatbot’s capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 2: Map the user journey
&lt;/h3&gt;

&lt;p&gt;After concluding that the chatbot’s purpose should be to act as my digital portfolio’s tour guide, I thought to myself: “how hard can it be? I have an online portfolio; there is my user journey.” I “programmed” Nathan Mk II to capture certain keywords and reply to them accordingly. However, users could jump to a different section of the site whenever they wanted, and connecting it all together was troublesome. I got lost thinking about how many different permutations could take a user from point A to wherever they wanted to go. On top of that, there were a lot of holes that I didn’t foresee. It felt like it never ended.&lt;/p&gt;
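&lt;p&gt;The keyword capture I describe can be sketched as a simple lookup: scan the message for the first known keyword and reply accordingly. The keywords and replies here are invented.&lt;/p&gt;

```javascript
// Invented keyword-to-reply table for the sketch.
const replies = {
  portfolio: 'Let me walk you through my work.',
  contact:   'You can reach me through the form below.'
};

// Reply with the first matching keyword's answer, or admit defeat.
function replyTo(message) {
  const text = message.toLowerCase();
  const keyword = Object.keys(replies).find(k => text.includes(k));
  return keyword ? replies[keyword] : "I don't know";
}
```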

&lt;p&gt;Lesson learned: mapping the user journey will make a developer’s life easier when it’s time to program the chatbot. You will also foresee any holes that need plugging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 3: Build the script
&lt;/h3&gt;

&lt;p&gt;Since I didn’t map out the user journey properly, I thought I could write the script on the fly. Again, “how hard can it be? I have my website; there’s my script.” Copy and paste should do it. As I progressed in building my chatbot, I found it hard to come up with good replies and answers. The script was either too broad, boring or full of open-ended questions, which steered users away from telling the chatbot the right keywords.&lt;/p&gt;

&lt;p&gt;You need to lead your users. Guide them on what actions they can take to progress further. If you can’t come up with a meaningful script, hire a copywriter. Solving logic is one thing; having a meaningful conversation is another.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 4: Add some flavor to it
&lt;/h3&gt;

&lt;p&gt;A chatbot is still an inanimate object, and text conversation can get boring quickly; it will only say whatever you program it to say. You can always mix it up with animated GIFs, photos, emojis, etc. (if applicable) just to keep the conversation interesting and fun. Give the chatbot a little personality as well; it will make users remember it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lesson 5: “I need a human”
&lt;/h3&gt;

&lt;p&gt;After endless hours of using and testing, I got used to navigating around the site using Nathan. I deployed it and let my friends try it. In the end, users got stuck in an infinite loop of “I don’t know” replies, which makes for a bad user experience. Instead of trapping your users in limbo, make sure that after at most three “I don’t know” replies, you give them the option to talk to a human. We make our users happy by helping them reach their objective.&lt;/p&gt;
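&lt;p&gt;That escape hatch can be sketched as a small counter: after three unrecognised messages in a row, offer a human instead of looping forever. The limit and the wording are my own.&lt;/p&gt;

```javascript
// Build a fallback handler that tracks consecutive misses and, once the
// limit is hit, offers a human instead of another "I don't know".
function makeFallback(limit = 3) {
  let misses = 0;
  return function onMessage(understood) {
    if (understood) { misses = 0; return null; } // normal flow handles the reply
    misses += 1;
    return misses >= limit
      ? 'I seem to be stuck. Would you like to talk to a human?'
      : "I don't know";
  };
}
```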

&lt;h2&gt;
  
  
  Nathan Mk III
&lt;/h2&gt;

&lt;p&gt;Creating Nathan was a good learning experience and I had a lot of fun exploring this emerging technology. I learned that planning is essential because it is the backbone of the entire chatbot project. Just like any project you may encounter, we need to plan, foresee the issues that may arise, and ask the questions that need asking. If the planning is done properly, it is smooth sailing from there onwards.&lt;/p&gt;

&lt;p&gt;Chatbots are popping up everywhere and use natural language to communicate with their users. We can further enhance the user experience by using voice to communicate; voice interactions and voice user interfaces will be the next big thing in the industry. Here is a demo of what came out of my learning experience. Have a play!&lt;/p&gt;

&lt;p&gt;&lt;a href="http://adel.ph/nat" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Fassets%2Fadelphi%2Fimages%2Farticles%2Fnathan-ai-qr-code.png" alt="Nathan AI Mk III"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chatbot</category>
      <category>development</category>
      <category>ai</category>
      <category>technology</category>
    </item>
    <item>
      <title>Service workers has finally landed in iOS!  Now what?</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Wed, 02 May 2018 21:59:10 +0000</pubDate>
      <link>https://forem.com/oninross/service-workers-has-finally-landed-in-ios--now-what-110o</link>
      <guid>https://forem.com/oninross/service-workers-has-finally-landed-in-ios--now-what-110o</guid>
      <description>&lt;h2&gt;
  
  
  What is so important about iOS 11.3?
&lt;/h2&gt;

&lt;p&gt;There are a lot of updates in this release. Most bring better experiences to the user, like the new AR experiences, Animoji and the fix for the battery issue that had been plaguing iPhone users since version 11.2. There is, however, one feature that has frontend developers all hyped up and that is not mentioned in Apple news and blogs: the arrival of service workers.&lt;/p&gt;

&lt;p&gt;On December 20, 2017, WebKit tweeted the release notes for the Safari Technology Preview and Service Workers were enabled by default.&lt;/p&gt;

&lt;p&gt;What did this mean? Progressive Web Apps (PWAs) are coming to iOS devices! Service workers are the heart of every PWA. For months, developers patiently waited for service workers to arrive officially on iOS devices. We all hoped for a release during the March event, but it wasn’t even mentioned.&lt;/p&gt;

&lt;h2&gt;
  
  
  The silent release
&lt;/h2&gt;

&lt;p&gt;I gave up hope while the Twitter-verse was still complaining about the battery issue and shouting at Apple to drop the update already. A few days later, they did drop the update, without any big news. I grabbed an updated iPhone to see what features were available, visited &lt;a href="https://whatwebcando.today/" rel="noopener noreferrer"&gt;whatwebcando.today&lt;/a&gt; to check, and this is what I saw:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fservice-workers-on-ios%2Fwhatwebcandotoday.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fservice-workers-on-ios%2Fwhatwebcandotoday.png" alt="WhatWebCanDoToday Feature Table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  ✔️ Offline Storage&lt;/li&gt;
&lt;li&gt;  ✔️ Offline Mode&lt;/li&gt;
&lt;li&gt;  ❌ Local Notifications&lt;/li&gt;
&lt;li&gt;  ❌ Push Messages&lt;/li&gt;
&lt;li&gt;  ❌ Home Screen Installation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the most important features for giving a seamless experience on both Android and iOS. They are already enabled by default on Android to provide that “app-like” experience. We are now just waiting for iOS to play catch-up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are these features important for PWA?
&lt;/h2&gt;

&lt;p&gt;The core pillars of a PWA are being Reliable, Fast and Engaging. These pillars enhance the user experience on both mobile and desktop sites.&lt;/p&gt;

&lt;p&gt;Being reliable means that when the PWA is launched from the user’s home screen, it loads instantly regardless of the network state. There is no “down time” and the user never sees the downasaur. PWAs install on the user’s home screen (Home Screen Installation) and cache the necessary assets (Offline Storage/Mode) to bring an optimal experience without searching through the seas of apps in the app store.&lt;/p&gt;

&lt;p&gt;Engaging means that a PWA feels like a natural app on the device and is installable on the user’s home screen (Home Screen Installation) without the need for an app store. On top of that, push notifications (Local Notifications and Push Messages) help users re-engage with the site. These push notifications were once exclusive to apps; now they have arrived on the mobile web.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what can a PWA do and not do in iOS?
&lt;/h2&gt;

&lt;p&gt;There is only a little you can do for now, with only offline caching available on iOS. I have managed to tinker with some of the PWAs that I have developed on iOS. Here are my findings:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✔️ Offline Caching
&lt;/h3&gt;

&lt;p&gt;Hurray! The first step of a PWA has landed on iOS. With this feature, the service worker caches the necessary assets for offline usage or for when the network is not reliable. This launches the PWA (once installed) quicker than usual, keeping users engaged so they don’t drop off. It is helpful for any static or brochure-type app in places where the network connection could be crappy. Once installed, the user can browse through the app without relying too much on the network.&lt;/p&gt;
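&lt;p&gt;The cache-first logic at the heart of this behaviour can be sketched with the cache and network injected so it stands on its own; a real service worker would do the same inside a &lt;code&gt;fetch&lt;/code&gt; event handler using the Cache API.&lt;/p&gt;

```javascript
// Cache-first strategy: serve from the cache when possible, otherwise hit
// the network and store the response for next time. `cache` only needs
// get/set (a Map works for illustration); `fetchFn` stands in for fetch().
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.get(request);
  if (cached) return cached;              // instant, and works offline
  const response = await fetchFn(request);
  cache.set(request, response);           // store for the next launch
  return response;
}
```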

&lt;h3&gt;
  
  
  ❌ Home Screen Installation
&lt;/h3&gt;

&lt;p&gt;This one is a deal-breaker for me. One of the features I like about PWAs is letting users know that they can “install” the PWA on their home screen with a tap of a button. This is not yet implemented on iOS devices, and hopefully we will see it in the future. A work-around is to create an “Add to home screen” banner for iOS devices, giving simple instructions on how to add the PWA to the home screen.&lt;/p&gt;
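&lt;p&gt;A hypothetical helper for that work-around: show the DIY banner only on an iOS browser that is not already running from the home screen (iOS Safari exposes this via the non-standard &lt;code&gt;navigator.standalone&lt;/code&gt; flag). The function name and user-agent check are my own.&lt;/p&gt;

```javascript
// Decide whether to show a DIY "Add to Home Screen" banner: iOS user agent,
// but not already launched from the home screen. In the browser you would
// call this with navigator.userAgent and navigator.standalone.
function shouldShowIosBanner(userAgent, isStandalone) {
  const isIos = /iphone|ipad|ipod/i.test(userAgent);
  return isIos && !isStandalone;
}
```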

&lt;h3&gt;
  
  
  ✔️/❌ Offline Mode
&lt;/h3&gt;

&lt;p&gt;Once the user has added the PWA to the home screen, the device spins up another instance of the PWA. This means that if the user launches the PWA from the home screen while offline or on a crappy network, it loads the PWA again from scratch and caches it again. Not only is this troublesome, it’s not a good user experience for iOS users.&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ Local Notifications / Push Messages
&lt;/h3&gt;

&lt;p&gt;If this feature manages to land on iOS devices, it might be the death of native apps. It enables users to receive notifications on their mobile devices without installing an app, letting them re-engage quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apple needs to play catch-up
&lt;/h2&gt;

&lt;p&gt;Since the launch of the iPhone 3GS, we have always held high expectations of Apple. With Apple lagging behind in web technologies, they must catch up with the latest trends, and we developers will have to be a little more patient in waiting for more service worker features. It will get there; frankly, we didn’t think service workers would ever land on iOS at all, since they might be the death of the App Store.&lt;/p&gt;

&lt;p&gt;It’s a start. The rest will eventually follow.&lt;/p&gt;

</description>
      <category>serviceworkers</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>pwa</category>
    </item>
    <item>
      <title>The Art of Minimalism with UX</title>
      <dc:creator>Nino Ross Rodriguez</dc:creator>
      <pubDate>Mon, 30 Apr 2018 21:52:48 +0000</pubDate>
      <link>https://forem.com/oninross/the-art-of-minimalism-with-ux-4ppd</link>
      <guid>https://forem.com/oninross/the-art-of-minimalism-with-ux-4ppd</guid>
<description>&lt;p&gt;Minimalism is on the rise — but what is it? Is it the style of art found in architecture, painting, sculpture and design that eliminates all non-essential forms and features? Or is it a lifestyle in which you declutter your life of all unnecessary things?&lt;/p&gt;

&lt;p&gt;Regardless of how you define minimalism, every definition shares one common denominator — eliminating what is not needed and removing all distractions. I have found that minimalism can also help improve user experience.&lt;/p&gt;

&lt;h2&gt;Journey to Minimalism&lt;/h2&gt;

&lt;p&gt;I was hooked on minimalism before I fully understood its core concepts. My &lt;a href="https://www.infiniteimaginations.co/#/hello/" rel="noopener noreferrer"&gt;online portfolio&lt;/a&gt; was due for a redesign, and I wanted it to have a simple, timeless look and feel. I applied minimalist principles, and the result was being featured in Web Designer Depot as one of &lt;a href="https://www.webdesignerdepot.com/2017/01/the-best-new-portfolio-sites-january-2017/" rel="noopener noreferrer"&gt;The Best New Portfolio Sites, January 2017&lt;/a&gt;. It received the following comment:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Infinite imaginations combines a bit of a “techy” style with &lt;strong&gt;minimalism&lt;/strong&gt;, understated animation, and the periodic table. I’m not even kidding about that. While the design does have its (miniscule) flaws, its &lt;strong&gt;reserved sense of style is both appealing and kind of relaxing&lt;/strong&gt;.” - Ezequiel Bruni (Web Designer Depot)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I experimented with minimalism once more in one of my personal projects. It started as a beacons experiment but turned into something different: a progressive web app (PWA) that tells users when the next bus will arrive. I left the project running for some time and received the following comment:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Great app. Really really like it, especially after going onto the nxtbus website every time I had to check if I hadn't missed my bus already and also as a regular bus user. Also very &lt;strong&gt;aesthetically pleasing&lt;/strong&gt;. Good job guys 👍”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, I watched a documentary on Netflix titled “Minimalism: A Documentary About the Important Things.” I didn’t read the entire title; all I read was “Minimalism,” and I went in expecting to learn about minimalism in art and design. Five minutes in, I realised I was in for a different ride. The documentary shows minimalists from different walks of life striving to live a meaningful life with less. I discovered that minimalism can also be applied to a person’s lifestyle.&lt;/p&gt;

&lt;p&gt;It got me thinking further. Maybe minimalism can be applied everywhere: in a person’s lifestyle, in design and in development. The basic principles share one common factor: simplicity improves well-being. And if minimalism can improve a person’s well-being, maybe it can also improve user experience.&lt;/p&gt;

&lt;h2&gt;User Experience Today&lt;/h2&gt;

&lt;p&gt;People are already plagued with distractions of all kinds every day. Just think about our smart devices: we receive hundreds of notifications from social media, news outlets and apps telling us to wish a friend a happy birthday, that a person we follow has bought a new shirt, or simply to get up off our seat and walk around. Then there is the internet, where a high-speed connection lets us open multiple tabs and scroll endlessly through pages and pages of clickbait. Whatever kind of distraction it is, it takes the user’s focus away from the main task at hand.&lt;/p&gt;

&lt;h3&gt;Restaurant Kiosks&lt;/h3&gt;

&lt;p&gt;One of the worst user experiences I have had with an interface was at a restaurant that used iPads as its ordering menu. Some restaurants think that replacing a server with an iPad will be more efficient. In some cases this might be true, but not always, as with an interface like the one below...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FiPad-menu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FiPad-menu.jpg" alt="iPad Oredering Menu"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This image is being far too kind to my experience; it is already decent (I couldn’t find a photo of an iPad with a truly cluttered interface). The one we saw had a lot of “ads” promoting what the restaurant sells, and it was difficult to find the food that we wanted. Simply having an iPad doesn’t make a restaurant high-tech or on trend. Management also needs to consider how diners will actually use it.&lt;/p&gt;

&lt;h3&gt;Apps and Websites&lt;/h3&gt;

&lt;p&gt;Countless times I have seen apps and websites that are just painful to use, to the point that I wish I could unsee them: entire sites or apps with so much clutter on the page that users are distracted from performing their main task.&lt;/p&gt;

&lt;p&gt;There was a time when building an app was the solution to everything, and fitting every single function onto a 4-inch display was a must. Take the example below: it bombards the user with so many features that they are overwhelmed by what to use and how to use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2Fmotionx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2Fmotionx.jpg" alt="Motion X iPhone App"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another example, one of the most infamous websites and every designer’s nightmare, is &lt;a href="https://www.lingscars.com/" rel="noopener noreferrer"&gt;Ling’s Cars&lt;/a&gt;. It breaks almost every rule in design, especially when it comes to focusing on the users’ needs. It is not the prettiest site (pun intended), and it leaves users confused about what the site actually does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2F%2Farticles%2Fthe-art-of-minimalism-with-ux%2Flingscars.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2F%2Farticles%2Fthe-art-of-minimalism-with-ux%2Flingscars.jpg" alt="Ling's Cars"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Laws of UX&lt;/h2&gt;

&lt;p&gt;There is a collection of principles that designers and developers can consider when designing and developing user interfaces, called the &lt;a href="https://lawsofux.com/" rel="noopener noreferrer"&gt;Laws of UX&lt;/a&gt;. I have hand-picked a few of them that correlate with minimalism.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2Fcover.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2Fcover.gif" alt="Laws of UX"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Fitts’s Law&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;“The time to acquire a target is a function of the distance to and size of the target.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FFitts%27s-Law.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FFitts%27s-Law.gif" alt="Fitts's Law"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Calls to action are the elements we want users to click on: usually buttons in a contrasting colour with short, clear text describing what will happen on click. We have to help users find and select these elements easily by designing an interface that is clean and easy to understand, since one of the defining characteristics of minimalism is clarity.&lt;/p&gt;
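&lt;p&gt;A common way to quantify this is the Shannon formulation of Fitts’s law, MT = a + b × log2(D/W + 1), where D is the distance to the target and W its width. A rough sketch (the constants a and b below are illustrative, not measured values):&lt;/p&gt;

```javascript
// Fitts's law in the Shannon formulation: MT = a + b * log2(D / W + 1).
// a and b are device- and user-dependent constants fitted from studies;
// the defaults here are made up for illustration.
function movementTime(distance, width, a = 0.1, b = 0.15) {
  return a + b * Math.log2(distance / width + 1);
}
```

&lt;p&gt;A bigger or closer target lowers the predicted time, which is why prominent, well-placed calls to action are faster to hit.&lt;/p&gt;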

&lt;h3&gt;Hick’s Law&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;“The time it takes to make a decision increases with the number and complexity of choices.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FHick%27s-Law.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FHick%27s-Law.gif" alt="Hick's Law"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Complicated interfaces overwhelm users, slowing them down or even causing mistakes along the way. We have to keep it simple for our users: determine what the end goal is and help them attain it in the easiest way possible. Another characteristic of minimalism is removing clutter, including non-functional decorative elements. This way, the interface immediately focuses users on their next action. The rule of thumb here is: “if it doesn’t serve a purpose in helping the user reach their end goal, get rid of it.”&lt;/p&gt;
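&lt;p&gt;Hick’s law is often quantified as T = b × log2(n + 1) for n equally likely choices, so trimming options gives diminishing but real gains. A rough sketch (the constant b below is illustrative):&lt;/p&gt;

```javascript
// Hick's law sketch: decision time grows logarithmically with the number
// of equally likely choices, T = b * log2(n + 1). The constant b would
// come from user studies; 0.2 is made up for illustration.
function decisionTime(numChoices, b = 0.2) {
  return b * Math.log2(numChoices + 1);
}
```

&lt;p&gt;Cutting a menu from 20 items to 4 reduces the predicted decision time, which is the quantitative case for decluttering an interface.&lt;/p&gt;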

&lt;h3&gt;Jakob’s Law&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;“Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FJakob%27s-Law.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fadelphi.digital%2Farticles%2Fthe-art-of-minimalism-with-ux%2FJakob%27s-Law.gif" alt="Jakob's Law"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learning something new can be a fun or a painful experience for any user. Minimalism focuses on the functionality of every element, ensuring that each one is understood effortlessly. Presenting users with familiar elements and common design patterns found on other websites and in everyday interfaces simplifies the learning process.&lt;/p&gt;

&lt;h3&gt;Miller’s Law&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;“The average person can only keep seven (plus or minus two) items in their working memory.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Complexity is exhausting for a person’s memory, especially with today’s everyday distractions. Minimalism is often associated with simplicity: simplifying the interface and the process gives the user an easy and effective means of achieving their goals. Limit the actions and items users have to commit to memory in order to finish a task.&lt;/p&gt;

&lt;h2&gt;KISS Principle&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“Overload, clutter, and confusion are not attributes of information, they are failures of design.” - &lt;a href="http://adage.com/article/adagestat/edward-tufte-adagestat-q-a/230884/" rel="noopener noreferrer"&gt;Edward Tufte&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;People will always be attracted to bright colours and flamboyant designs; “the bigger the better,” they always say. But users are already flooded with distractions of their own, so let’s make it easier for them to accomplish their main task. We may not realise it, but subconsciously we tend to look for the simple solution. Minimalism is a philosophy, a movement, a lifestyle. Whatever you call it and however you apply it, minimalism promotes the removal of unnecessary elements, keeping everything simple without losing meaning or clarity. Always take a step back and remember: &lt;em&gt;"Keep It Simple, Stupid"&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;Interaction Design Foundation&lt;/h2&gt;

&lt;p&gt;IDF is an independent nonprofit initiative whose objective is to &lt;em&gt;“Raise global design education to an Ivy League standard, while at the same time reduce costs to as low as possible.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;IDF made me an Educational Partner. &lt;a href="https://www.interaction-design.org/invite?ep=nino-ross-rodriguez" rel="noopener noreferrer"&gt;Sign up with my link&lt;/a&gt; to get 3 months of free membership and start learning great UX design.&lt;/p&gt;

&lt;p&gt;Are you interested or new to UX design? &lt;a href="https://www.interaction-design.org/ebook?ep=nino-ross-rodriguez" rel="noopener noreferrer"&gt;Get The Interaction Design Foundation’s free ebook!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Visit The Interaction Design Foundation for more &lt;a href="https://www.interaction-design.org/literature/topics/ux-design" rel="noopener noreferrer"&gt;UX design articles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Subscribe to &lt;a href="https://www.interaction-design.org/newsletter?ep=nino-ross-rodriguez" rel="noopener noreferrer"&gt;The Interaction Design Foundation Newsletter&lt;/a&gt; to get weekly high quality educational materials.&lt;/p&gt;

</description>
      <category>ux</category>
      <category>minimalism</category>
      <category>design</category>
      <category>development</category>
    </item>
  </channel>
</rss>
