<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Iflexion</title>
    <description>The latest articles on Forem by Iflexion (@iflexion).</description>
    <link>https://forem.com/iflexion</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F107297%2F36bcaa17-0b0d-4322-92ee-a9130a5f747d.jpg</url>
      <title>Forem: Iflexion</title>
      <link>https://forem.com/iflexion</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/iflexion"/>
    <language>en</language>
    <item>
      <title>Your Quick Guide to Custom Construction Software</title>
      <dc:creator>Iflexion</dc:creator>
      <pubDate>Tue, 26 Nov 2019 13:58:23 +0000</pubDate>
      <link>https://forem.com/iflexion/your-quick-guide-to-custom-construction-software-2ad8</link>
      <guid>https://forem.com/iflexion/your-quick-guide-to-custom-construction-software-2ad8</guid>
      <description>&lt;p&gt;In recent years, technology has been increasingly adopted in almost every industry; from oil and gas to retail and manufacturing. One of the most recent sectors to welcome technological solutions is the construction industry. While many companies still find it hard to incorporate digital innovation into their physical operations, the trend of embracing IT solutions in construction is accelerating at a rapid pace. &lt;/p&gt;

&lt;p&gt;In this article, we’ll take a look at the latest tech trends in construction, and, in particular, how software solutions can boost this industry. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Construction Industry Today
&lt;/h2&gt;

&lt;p&gt;At one time, the world’s tallest building was the Empire State Building in New York City. At 381m, it reached heights never achieved before and held the title from 1931 to 1970. Today it ranks No. 43.&lt;/p&gt;

&lt;p&gt;In 1971, it was surpassed by the World Trade Center, which stood at 417m before it collapsed in 2001. But even before it fell, it lost its title to Willis Tower (then known as the Sears Tower), which took the record in 1973 with a height of 442.1m.&lt;/p&gt;

&lt;p&gt;Jump forward several decades and a number of title changes: in 2019, the honor belongs to Dubai, where the Burj Khalifa stands proudly at 829.8m, approaching a full kilometer.&lt;/p&gt;

&lt;p&gt;Compelled by the drive to go higher and higher, the construction industry has engaged new materials and processes to create these giants. Each in its own time was an innovation. Today, that compulsion for development has grown to include innovating internal processes to facilitate more impressive and more efficient construction developments. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Lies Ahead for the Construction Industry?
&lt;/h2&gt;

&lt;p&gt;Let’s take a look at some of the innovations that construction businesses are adopting in 2019. &lt;/p&gt;

&lt;h3&gt;
  
  
  Augmented Reality (AR)
&lt;/h3&gt;

&lt;p&gt;Commonly thought to be confined to the world of gaming, AR solutions are branching out across all industries. They are moving away from their entertainment-only purpose and focusing on changing the world as we know it. &lt;/p&gt;

&lt;p&gt;In the construction industry, they present a number of advantages such as: &lt;/p&gt;

&lt;p&gt;• Creating digitized comparisons by tracking the progress of a real-world construction project against the initial models.&lt;/p&gt;

&lt;p&gt;• Allowing investors or other interested parties to tour building sites in safety and from many miles away.&lt;/p&gt;

&lt;p&gt;• Helping to develop a floor plan, including measurements, evenness of surfaces and defects, which simplifies the construction process. &lt;/p&gt;

&lt;h3&gt;
  
  
  Drones
&lt;/h3&gt;

&lt;p&gt;From surveying land to facilitating communication on sites, drones are one of the latest technologies increasingly used in construction. Aside from seeing how a site is progressing and easily delivering this information to clients, the latest drone technology is becoming more accurate in taking measurements, which can be vital in seeing how a project is really shaping up. &lt;/p&gt;

&lt;h3&gt;
  
  
  Robotics
&lt;/h3&gt;

&lt;p&gt;This tech area has long been in use on car production lines; however, its role in the construction industry has been considerably less visible. That said, building companies are now seeking to embrace robotic solutions much like their automotive counterparts, with technologies such as bricklaying robots, demolition bots, and even 3D printing bots that increase innovation and safety across construction sites. &lt;/p&gt;

&lt;h3&gt;
  
  
  Back-office software
&lt;/h3&gt;

&lt;p&gt;While in the past the use of software by construction companies was confined to standalone task management, billing, and administration, as we move further into 2019 all that is set to change. &lt;/p&gt;

&lt;p&gt;Building companies are increasingly seeking the &lt;a href="https://www.iflexion.com/services/enterprise-software-development"&gt;enterprise software development services&lt;/a&gt; of custom construction software providers to optimize operations and develop innovations that help them more efficiently manage their internal processes for greater results. &lt;/p&gt;

&lt;p&gt;These solutions have the potential to:&lt;/p&gt;

&lt;p&gt;• Address the needs of various actors in construction projects, including staff, managers, and clients, and improve collaboration and communication;&lt;/p&gt;

&lt;p&gt;• Control various stages of construction and renovation projects with workflow and task management tools;&lt;/p&gt;

&lt;p&gt;• Help to make better decisions, aided with analytics and reporting.&lt;/p&gt;

&lt;p&gt;In addition, custom software solutions provide a company with the tools to integrate any of the previously mentioned technologies: robotics, AR, and drones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases of Custom Construction Software
&lt;/h2&gt;

&lt;p&gt;A solution in its own right, custom construction software forms one of the cornerstones of progress in the building industry. &lt;/p&gt;

&lt;p&gt;It is instrumental throughout the entire construction process, from project conception to billing. Here’s how it can have an impact at each of the stages. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Project conception&lt;/strong&gt;&lt;br&gt;
At the planning stage, custom construction software facilitates communication between various parties within the construction process. It allows the entities involved––architect, project manager, foreman, investor, etc.––to collaborate clearly through a singular defined channel, reducing the risk of miscommunication or lost information and paving the way for successful working relationships.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Construction project management&lt;/strong&gt;&lt;br&gt;
Pushing on from this, the potential solutions may also encompass numerous features that streamline the management process and bring harmony between different departments. &lt;/p&gt;

&lt;p&gt;This can be achieved through project management tools such as task and workflow management systems. They allow the users to control and monitor tasks, establishing a chain of responsibility for their completion. They also provide dashboards to access project analytics and data easily, to monitor progress and evaluate which steps need to be taken. Additionally, calendars showing the overall project progress can provide an up-to-date timeline for investors, participants and end buyers (if applicable). &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Monitoring&lt;/strong&gt;&lt;br&gt;
Analytic and reporting tools built into custom software solutions help construction businesses to base their decisions on relevant data. This ensures that decision-making is efficient and grounded in reality, not assumptions. &lt;/p&gt;

&lt;p&gt;By combining and visualizing data from various aspects of the construction project through charts and reports, the project manager is in a better position to evaluate the progress of a project and decide on the next step to be taken.&lt;/p&gt;

&lt;p&gt;These could later be combined with aerial images from a drone, for example, or digitized AR plans to develop presentations for investors or other interested parties, thus building construction vendors’ reliability and credibility. &lt;/p&gt;

&lt;p&gt;Such integrations are already possible; one such product is a defect management solution that allows construction companies to inspect, report and manage defects while on-site. This helps to improve document management and the efficient resolution of construction defects.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Customer relationships&lt;/strong&gt;&lt;br&gt;
Whether a customer is the end buyer, a sales company or an investor, ensuring productive communication with them is an essential part of project management. While each requires different accountability and reporting standards in this respect, all of them share the need for consistency and updates on the project progress.&lt;/p&gt;

&lt;p&gt;Customer relationship management (CRM) software is one solution that could address this issue. A CRM system can be integrated into a wider enterprise ecosystem to store customer profiles. With its help, your customer service specialists as well as sales staff can ensure timely and relevant communication with every customer. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Billing&lt;/strong&gt;&lt;br&gt;
Different companies bill differently––payment on completion, payment schedules, payment per project; the list goes on. Yet they all have one thing in common: the need for accurate and efficient billing.&lt;/p&gt;

&lt;p&gt;Integrated custom construction software makes that possible through Accounts Payable (AP) and Accounts Receivable (AR) systems that help businesses better manage their budgets and payment schedules.&lt;/p&gt;

&lt;p&gt;Solutions in this area may include tailored ticketing solutions, such as &lt;a href="https://www.iflexion.com/portfolio/enterprise-ticketing-system"&gt;eTicket&lt;/a&gt; among others, that allow a business to report more fully on expenditures. Such systems use digital ticketing for tracking the journey of materials, ensuring they reach the right person at the right time as well as correct billing. Ultimately this saves time, energy and money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing Positive Transformations with Technologies
&lt;/h2&gt;

&lt;p&gt;These are just some of the areas set to revolutionize how the construction industry develops; the list is by no means exhaustive. &lt;/p&gt;

&lt;p&gt;Each and every business has specific needs that demand unique and innovative solutions. Early adopters in the construction industry are set to see benefits such as:&lt;/p&gt;

&lt;p&gt;•    Consistent service through increasingly efficient monitoring&lt;br&gt;
•    Enhanced quality control&lt;br&gt;
•    Opportunities for business expansion and growth&lt;br&gt;
•    Streamlined project management&lt;br&gt;
•    More efficient and transparent service delivery&lt;br&gt;
•    Better budget management&lt;/p&gt;

</description>
      <category>constructionsoftware</category>
      <category>softwaredevelopment</category>
      <category>dev</category>
    </item>
    <item>
      <title>An Insider’s Take on the Benefits and Challenges of Dedicated Development Teams</title>
      <dc:creator>Iflexion</dc:creator>
      <pubDate>Wed, 20 Nov 2019 08:41:44 +0000</pubDate>
      <link>https://forem.com/iflexion/an-insider-s-take-on-the-benefits-and-challenges-of-dedicated-development-teams-42o9</link>
      <guid>https://forem.com/iflexion/an-insider-s-take-on-the-benefits-and-challenges-of-dedicated-development-teams-42o9</guid>
      <description>&lt;p&gt;As consumers, whenever we need a product or service, our immediate instinct is to go online to find what we need and, of course, read the reviews to see if it lives up to our expectations. After all, anything worth knowing is online. &lt;/p&gt;

&lt;p&gt;When we sit on the other side of the table as business owners, we need to ensure our consumer-facing digital assets bring value and enhance our customers’ experiences with our brand.&lt;/p&gt;

&lt;p&gt;This is the digital challenge faced by organizations worldwide: evaluating and improving their digital assets is no longer a choice but a necessity. From web presence to standalone software and apps to developing value-adding automated solutions, once you’ve decided to embrace new technology, the first question is—how? &lt;/p&gt;

&lt;p&gt;Depending on the size and capabilities of your enterprise, you may be in a position to develop in-house. However, that’s not always the case and many organizations worldwide choose to engage &lt;a href="https://www.iflexion.com/services/dedicated-development-team"&gt;dedicated development teams instead&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;As insiders to the industry, we thought we’d take our time to review some of the advantages of setting up your own dedicated team with a technology engineering vendor and the challenges you may face along the way. &lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Dedicated Development Teams
&lt;/h2&gt;

&lt;p&gt;Like any business decision, bringing a dedicated development team into your activities requires careful consideration. These are some of the benefits you can expect when working with a dedicated team:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Expertise&lt;/strong&gt;&lt;br&gt;
Knowing precisely how to approach a software development solution is half the battle. Dedicated development teams are experienced not only in creating tailored solutions but also in troubleshooting. This means that your digital needs can be met with skill, and fewer issues will be encountered along the way. &lt;/p&gt;

&lt;p&gt;In addition, a dedicated team will be able to advise you clearly on the viability of your project, the ways to bring it to fruition, and timeframes, saving you from the trial-and-error hassle.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Adaptability&lt;/strong&gt;&lt;br&gt;
Perhaps your business already has digital assets and is seeking an update. While your in-house team may be able to maintain the current system, it will be a challenge to update it to meet the latest standards and trends. &lt;/p&gt;

&lt;p&gt;This is because the world of technology is rapidly advancing, and it’s likely your current systems have built up some issues and technical debt along the way. Dedicated specialists know how to take on the issues and solve them through code refactoring and redesign, or even proposing a workable alternative altogether. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Cost-effectiveness&lt;/strong&gt;&lt;br&gt;
Hiring in-house specialists can be a cost-intensive process. Consider the finances needed for hiring, onboarding, salaries, benefits, and insurance. Additionally, to meet your project’s needs, you may not be talking about just hiring one person but a few. Conversely, a dedicated team located on your tech partner’s premises allows you to focus your finances solely on what matters—the results. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Flexibility&lt;/strong&gt;&lt;br&gt;
Dedicated teams provide assistance when and as you need it. Very often, they will be able to address even the most challenging of issues with their cross-disciplinary expertise and access to rare talent in the field. Additionally, as soon as their services are no longer required, you will be able to disengage from them, provided it’s in line with your initial contractual terms.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Collaboration&lt;/strong&gt;&lt;br&gt;
Although we said it’s a choice between in-house and outsourcing, it doesn’t have to be. If appropriate, dedicated development teams can integrate seamlessly into your current staffing and internal processes, cooperating with your existing team to deliver results. They may also provide tailored training so that your staff can perform essential maintenance on the software or product after the contract ends.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of Working with Dedicated Teams
&lt;/h2&gt;

&lt;p&gt;Although engaging a dedicated development team may seem like a perfect, problem-free solution, it does come with its own unique set of challenges that need to be overcome for success.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Finding the right provider&lt;/strong&gt;&lt;br&gt;
With thousands of providers out there, selecting the right dedicated development team can be a challenge. So, how do you narrow down the selection? Begin by understanding the basics of what you need—a website, a mobile application, or a custom piece of enterprise software? Search for providers who tick the right box and whittle down the selection through verified public reviews or references from past clients. &lt;/p&gt;

&lt;p&gt;Once you have a couple to choose between, get talking. Who knows most about what you need and your business? Who provided the most competitive quote? Which provider did you believe was the best fit as a partner for your business? All this is vital when deciding on the right team to undertake your project. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Communication&lt;/strong&gt;&lt;br&gt;
Having a remote team can be challenging, especially if you’ve never worked with one before. It’s an adjustment to have to write an email, send a message or schedule a call when you’re used to walking into the next room for a quick chat. In addition, perhaps you’ve had the experience that your past providers weren’t communicative, leading to mishandling of tasks and missed deadlines. &lt;/p&gt;

&lt;p&gt;While all this is a real risk of hiring a dedicated development team, taking preventive measures in advance raises your chances of success. For this, put in place a communication strategy with a reporting schedule and have a range of contacts on the partner’s side just in case something doesn’t go to plan. This way, you’ll be able to keep in touch when you need to. &lt;/p&gt;

&lt;p&gt;On the other hand, ensuring you are available when your team needs your feedback is equally important. Be sure to include feedback turnaround times in your project plan, so your team knows when to expect changes and when to push forward. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Time zone issues&lt;/strong&gt;&lt;br&gt;
In the twenty-first century, we have wider access to the worldwide market; this means you are free to choose the perfect development team that meets your needs from almost anywhere in the world. The challenge is, there is no single universal time zone, so that might mean you work while your remote team sleeps and vice versa, or you may only have a few hours of time overlap per day. &lt;/p&gt;

&lt;p&gt;For project success, some adjustments need to be made on both sides to allow constructive communication at the time suitable for everyone. You may also want to take account of and plan for gaps in messaging time and replies. Knowing about them in advance helps you work them into your project plan and avoid delays. &lt;/p&gt;

&lt;h2&gt;
  
  
  In-house vs. Outsourcing: A Round-Up
&lt;/h2&gt;

&lt;p&gt;Deciding whether to outsource or keep in-house is never a simple decision, and your choice will depend on a number of vital factors, such as cost efficiency, skills, available time, and the scope of the project. In order to make the right decision for your business, examine the factors above in line with your business objectives. This should give you a deeper understanding of your next steps and provide you with the basis for choosing the best-matching dedicated development provider. &lt;/p&gt;

</description>
      <category>dedicateddevelopmentteam</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Occlusion: the Problem of Putting Us in the Picture with Augmented Reality</title>
      <dc:creator>Iflexion</dc:creator>
      <pubDate>Fri, 23 Aug 2019 13:54:19 +0000</pubDate>
      <link>https://forem.com/iflexion/occlusion-the-problem-of-putting-us-in-the-picture-with-augmented-reality-3i9h</link>
      <guid>https://forem.com/iflexion/occlusion-the-problem-of-putting-us-in-the-picture-with-augmented-reality-3i9h</guid>
      <description>&lt;p&gt;In 2016, the phenomenon of Pokémon Go gave augmented reality (AR) its first “killer app”—something that virtual reality (VR) had been &lt;a href="https://killscreen.com/articles/failure-launch/" rel="noopener noreferrer"&gt;seeking in vain for 25 years&lt;/a&gt;. In spite of Go's whimsical nature, this was the first time that the public was able to interact in the real world, and in real time, with computer-generated elements that were integrated into a real-life experience. &lt;br&gt;
Pokémon Go was able to accomplish this while working within the technical constraints of common mobile devices, rather than tantalizing consumers with YouTube video demonstrations of impressive feats destined &lt;a href="https://www.theregister.co.uk/2016/12/09/magic_leap_neither_magic_nor_leaping/" rel="noopener noreferrer"&gt;to remain vaporware&lt;/a&gt;—at least in terms of an actual RTM product. In the end, first place went to the software that actually shipped.&lt;br&gt;
Unfortunately, from a commercial point of view, the free-roaming nature of Go set such ambitious goals for Mixed Reality (MR) that the state-of-the-art was soon to hit an implacable roadblock.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;VR and AR Converge&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Historically, VR was never able to get away from coin-fed mall machines, specialist urban experiences, or a hamstrung series of over-priced, over-tethered, or underpowered home devices over the course of the last thirty years. &lt;br&gt;
The computational and graphical demands of full-view, high-resolution interactive environments are so exhausting that no leap in technology has been able to significantly shrink the bulky headsets or the anchoring support systems they often need, such as a PC. Nor has any removed the sense that this is an isolating, expensive, and anti-social technology at odds with an age beguiled by lightweight wearables and obsessed with digital social interaction. &lt;br&gt;
Augmented reality seemed to provide a more modern approach. First defined in 1990 by Boeing researcher Tom Caudell for a proposed industrial application, AR overlays the real world with artificial elements—a concept that hails back to WWII, and that first hit the car market &lt;a href="https://www.youtube.com/watch?v=AzVV8UUIu48" rel="noopener noreferrer"&gt;in the late 1980s&lt;/a&gt;. &lt;br&gt;
Current commercial interest in AR has been fueled by the development of wearable visors that are much more discreet than VR headsets and that can superimpose interactive digital elements on the real world, instead of replacing our view completely with a digital environment as VR does. Market offerings at the moment include the Microsoft HoloLens, Magic Leap, Google Glass Enterprise Edition, Vuzix Blade, the (struggling) Meta 2, Optinvent's ORA 2, and the Varia Vision, amongst others.&lt;br&gt;
At this stage, the distinction between AR and VR is becoming academic. Since AR can completely obscure the user's vision on demand, and VR systems increasingly make real-world views available, development is headed by consensus towards the &lt;a href="https://www.iflexion.com/blog/mixed-reality-examples" rel="noopener noreferrer"&gt;“Mixed Reality” experience&lt;/a&gt;. &lt;br&gt;
The Pokémon Go phenomenon has fueled consumer enthusiasm for flexible AR experiences that take place in non-controlled environments: streets, parks and open spaces. Advertisers want discoverable creations that dazzle and amaze; gaming companies want virtual worlds overlaid on public worlds. If we were going to stay home or enjoy AR only in the prescribed spaces of special events or fixed attractions, we might as well be back in the 1990s watching the last Virtual Reality bubble deflate under its technical and geographical limitations. &lt;br&gt;
All this means that virtual elements in AR need to behave as if they were real objects—to be able to hide under tables, to disappear behind real structures such as columns, doorways, walls, cars, lampposts, people; even skyscrapers, if the scenario should call for it. In computer vision, this is called occlusion, and it's essential for the future of genuinely immersive augmented reality systems. &lt;br&gt;
However, it's going to be a bit of a problem.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Three Common Approaches to AR Mapping&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;An AR system needs to understand the geometry of the environment it’s in: the tables, chairs, walls, alcoves, and any other object which could potentially need to appear to be in front of a virtual element during the AR session. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fwindows%2Fmixed-reality%2Fimages%2Fsurfacereconstruction.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fwindows%2Fmixed-reality%2Fimages%2Fsurfacereconstruction.jpg" alt="Furniture mapped by the Microsoft HoloLens"&gt;&lt;/a&gt;Furniture mapped by the Microsoft HoloLens &lt;/p&gt;

&lt;p&gt;It also needs to be able to recognize and extract real people who might also be in the AR space with you, since they may end up standing in front of virtual objects or structures.&lt;br&gt;
If the system can’t do this, it can't mask off the sections of a virtual element which logically should not be visible, such as the lower legs of a virtual robot that is standing behind a real table in front of you. Nor, in the case of real participants, can it “cut them out” of any virtual objects that they may be standing in front of.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/nRywAihJ2jM"&gt;
&lt;/iframe&gt;
 &lt;br&gt;A real person sandwiched between two non-real elements: the colorful background and the foreground objects.
 &lt;br&gt;
If the system can extract these “mattes,” but can’t do it fast enough or to an acceptable quality, either the virtual elements will lag behind the real elements, or the matte borders will be indistinct or unconvincing. &lt;br&gt;
There are three primary methods by which AR systems build the invisible 3D environment models that are necessary for realistic occlusion. Some need more preparation than others; some are more suited to certain situations than others, and none are ideal for a truly untethered and spontaneous mixed reality experience.&lt;/p&gt;
&lt;h3&gt;
  
  
  Time-Of-Flight
&lt;/h3&gt;

&lt;p&gt;The Time-Of-Flight sensor pulses infrared light rapidly at the environment and records the time it takes for these emissions to bounce back to the sensor. Objects that are further away return that light later than nearer objects, enabling the sensor to build up a 3D image of the room space.&lt;br&gt;
Version 2 of Microsoft’s Kinect sensor uses TOF, as do the LIDAR systems common in autonomous vehicle research. These sensors are also &lt;a href="https://web.stanford.edu/~zollhoef/papers/EG18_RecoSTAR/paper.pdf" rel="noopener noreferrer"&gt;widely used&lt;/a&gt; in industrial applications. &lt;br&gt;
Outdoors, TOF has severe limitations, since natural daylight will distort its results, and multiple TOF sensors (which would be helpful to avoid undercuts in AR mapping) are prone to interfere with each other's effectiveness.&lt;br&gt;
A TOF sensor suitable for a portable AR device (dedicated headset or phone) also has a maximum usable range of around four meters, so it certainly can’t help us to put a virtual Godzilla behind the Empire State Building. &lt;/p&gt;
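The TOF principle described above reduces to simple arithmetic: distance is half the round-trip time multiplied by the speed of light. A minimal sketch of the idea, not any vendor's sensor API:

```python
# Time-of-flight ranging: a minimal sketch of the principle, not a
# specific sensor's API. The sensor times an infrared pulse's round
# trip; distance is half that trip at the speed of light.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters."""
    return C * round_trip_seconds / 2.0

# A surface at the ~4 m usable limit returns the pulse in roughly
# 27 nanoseconds, which is why TOF hardware needs extremely fine
# timing electronics to resolve depth at all.
print(f"{tof_distance(26.7e-9):.2f} m")  # prints 4.00 m
```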
&lt;h3&gt;
  
  
  Stereo Camera
&lt;/h3&gt;

&lt;p&gt;Stereo cameras can generate 3D geometry from their native 2D depth maps, by comparing the differences between the two images. This technique is widely used in AR applications and hardware.&lt;br&gt;
On the plus side, this method can work well in real time, if necessary, allowing for more spontaneous and “live” environment mapping. &lt;br&gt;
Negatively, it works poorly in bad light, fails to account for undercuts (i.e. distinguishing an alcove in a far wall from a doorway) or other geometric anomalies that a preliminary mapping session might have identified, and operates at a significantly lower resolution than the actual camera output, often making for rough and unconvincing mattes.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/JemHaOV_Zpo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
In this video from Google's Project Tango, where live occlusion maps are generated from RGB-D data, we see the notable difference in quality between the video image and the available depth map.



&lt;p&gt;Worse yet, this approach can be completely &lt;a href="http://www.cs.cmu.edu/~ILIM/publications/PDFs/MKSN-PROCAMS12.pdf" rel="noopener noreferrer"&gt;undermined&lt;/a&gt; when the system is asked to map a blank or featureless area, such as a white wall.&lt;br&gt;
Since a stereo camera mapping system works under the same limitations as a pair of human eyes, it can't distinguish meaningful 3D information beyond a range of four meters. So, once again, Godzilla is out of luck. &lt;/p&gt;
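That four-meter wall falls out of basic stereo geometry: depth is focal length times baseline divided by disparity, so at a human-eye baseline the disparity signal shrinks quickly with range and a single pixel of error swallows ever more depth. A rough sketch using the textbook pinhole model (the focal length and baseline below are assumed illustrative values, not a real device's calibration):

```python
# Depth from stereo disparity, pinhole model: Z = f * B / d, where f is
# the focal length in pixels, B the camera baseline, d the disparity in
# pixels. Illustrative sketch only; values are assumed, not measured.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen with the given disparity."""
    return focal_px * baseline_m / disparity_px

f_px, baseline = 700.0, 0.065  # ~6.5 cm, about a human-eye baseline

for z in (1.0, 4.0, 8.0):
    d = f_px * baseline / z                        # disparity at true depth z
    z_err = stereo_depth(f_px, baseline, d - 1.0) - z  # effect of a 1 px error
    print(f"{z:4.1f} m -> disparity {d:5.1f} px, 1 px error shifts depth {z_err:+.2f} m")
```

At 1 m a one-pixel disparity error costs centimeters; at 8 m it costs well over a meter, which is why the matte quality collapses beyond a few meters.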

&lt;h3&gt;
  
  
  Structured Light 3D Sensor
&lt;/h3&gt;

&lt;p&gt;Here a striped infrared light pattern is projected onto the real-world shapes of the environment, and their contours are reconstructed by calculating the way the lines distort, from the point of view of an interpreting camera. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Femeralddental.com%2Fuploads%2FMedit%2520Blog.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Femeralddental.com%2Fuploads%2FMedit%2520Blog.jpg" alt="no"&gt;&lt;/a&gt;&lt;/p&gt;
By observing and comparing how straight lines are distorted when projected onto objects, SLS can compare the differences between the two viewpoints at its disposal and deduce the 3D geometry of the objects and the environment.



&lt;p&gt;SLS is used in the forward-facing depth sensors of the iPhone X (the rear sensors use stereo cameras), the focus of many emerging AR technologies and players, and the primary hardware considered for Apple’s influential ARKit.&lt;br&gt;
SLS is undermined by bright ambient light, and is therefore not a logical solution for mapping AR scenes that take place outdoors. Like TOF (see above), SLS uses infrared bounce returns and, in a viable AR scenario, is unlikely to place its sensors much further from each other than human eyes are placed. This limits the technique to an effective range of, you guessed it, four meters.&lt;/p&gt;
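Like stereo, structured light is ultimately triangulation: decoding the stripe pattern tells the system which projector ray hit a surface point, the camera pixel gives the viewing ray, and the two rays must close over the projector-camera baseline. A toy sketch of that geometry (the 8 cm baseline and the angles are assumptions for illustration, not a real sensor's calibration):

```python
import math

# Structured-light depth by triangulation: a toy model of the geometry,
# not a real sensor pipeline. The projector ray and the camera ray to
# the same surface point converge across the baseline separating them.

def structured_light_depth(baseline_m: float,
                           proj_angle_rad: float,
                           cam_angle_rad: float) -> float:
    # At depth Z, the horizontal offsets of the two rays (measured from
    # their parallel optical axes) must sum to the baseline:
    #   Z*tan(proj) + Z*tan(cam) = baseline
    return baseline_m / (math.tan(proj_angle_rad) + math.tan(cam_angle_rad))

# With an 8 cm baseline, a point seen ~5 degrees off each axis sits
# about half a meter away; at larger depths both angles shrink toward
# zero, so small angular noise again destroys the depth estimate.
z = structured_light_depth(0.08, math.radians(5), math.radians(5))
print(f"{z:.2f} m")
```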

&lt;h2&gt;
  
  
  &lt;strong&gt;Network Solutions?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The low latency and high data throughput of upcoming 5G networks are likely to prove tempting to &lt;a href="https://www.iflexion.com/services/augmented-reality-development" rel="noopener noreferrer"&gt;AR developers&lt;/a&gt; who, in their hearts, might prefer to address these issues on more powerful base-station nodes, and turn the AR headset into a relatively dumb playback device tethered across the network. &lt;br&gt;
But building such responsive network models into core AR frameworks seems likely to limit full-fledged AR systems to urban environments where 5G connectivity is widely available and affordable. In effect, it's yet another potential geographical anchor dragging against the dream of on-the-fly AR experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Redundant Effort in AR Mapping&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There is no real intelligence or persistence behind any of the popular current methods of generating 3D models for occlusion. When an AR system “mattes out” another person in your virtual meeting so that they appear to stand in front of a building whose construction will not begin for another eight months, that person is just another “bag of pixels” to the occlusion system. If they walk out of view and back in, the system has to start its analysis all over again. &lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FMax_Mignotte%2Fpublication%2F283967510%2Ffigure%2Ffig10%2FAS%3A296589928222723%401447723957352%2FSetup-and-pre-processing-steps-a-Original-depth-map-b-After-clipping-c-After-treadmill.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.researchgate.net%2Fprofile%2FMax_Mignotte%2Fpublication%2F283967510%2Ffigure%2Ffig10%2FAS%3A296589928222723%401447723957352%2FSetup-and-pre-processing-steps-a-Original-depth-map-b-After-clipping-c-After-treadmill.png" alt="no"&gt;&lt;/a&gt;&lt;/p&gt;
 Source photo credit: ResearchGate 



&lt;p&gt;In terms of static objects, this redundancy of effort is even worse: no matter how recognizable an object might be, AR occlusion systems must currently create an occluding mesh from scratch, every time. There are no shortcuts, no templates, and the ponderous and ongoing nature of the analysis makes lag or poor-quality mattes almost inevitable. Either is fatal for an authentic augmented reality experience.&lt;/p&gt;
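The occluding mesh itself ultimately feeds a simple per-pixel depth test. The sketch below (a toy NumPy rendition, not any vendor's pipeline) shows the comparison at the heart of occlusion compositing: a virtual pixel survives only where the virtual surface is closer to the camera than the real one:

```python
import numpy as np

# Toy sketch of the per-pixel depth test behind occlusion matting. All
# arrays are illustrative stand-ins for real sensor / renderer output.

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Overlay a rendered virtual layer onto a camera frame with occlusion."""
    visible = virt_depth < real_depth          # virtual surface is in front
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]           # draw only unoccluded pixels
    return out

# A 2x2 toy frame: the virtual object (at 1.0 m) is hidden where the
# real scene is closer (0.5 m) and shown where the real scene is farther.
real_rgb   = np.zeros((2, 2, 3), dtype=np.uint8)   # black camera frame
virt_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)  # white virtual layer
real_depth = np.array([[0.5, 3.0], [3.0, 0.5]])
virt_depth = np.full((2, 2), 1.0)
frame = composite(real_rgb, real_depth, virt_rgb, virt_depth)
```

The hard part, as the article argues, is not this test but producing a trustworthy `real_depth` (or occluding mesh) fast enough, every frame.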

&lt;h2&gt;
  
  
  &lt;strong&gt;A “Google Maps” for Augmented Reality Geometry?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The best solution for enabling AR occlusion in public environments would be to download already-existing geometry that has been created and indexed by a tech giant. But generating a 3D “map of the world” at this level of intricacy and resolution is so daunting a task, even for the likes of Apple or Google, that it might need instead to be made practicable by crowdsourcing. &lt;br&gt;
Current efforts in this direction are either limited to the walled gardens of individual tech ecosystems, such as the ability of Apple's ARKit and Microsoft's HoloLens to save a user's mappings, or represented by a myriad of startups apparently hoping for a Google buyout if Poly (see below) becomes a major player in public-facing AR. These include YouAR, Jido, Placenote, and Sturfee, among others.&lt;br&gt;
As such a database grows, it would become more accurate, gradually learning to discard transient structures such as cars, construction equipment, chained bicycles, and seasonal Christmas decorations. Eventually, it would come to represent a consistent and persistent archive of the geometry and occlusion mapping of an area.&lt;br&gt;
First, however, the information must be gathered at a resolution that’s acceptable for general consumption, at a speed which won’t discourage volunteer contributions, and at a quality that doesn’t need special equipment or elaborate methods. Machine learning seems set to provide the answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Assembling AR Mappings with Machine Learning&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Mapping the complex contours of a car is currently a considerable challenge for state-of-the-art AR. The curves are complex, the surface is reflective, and parts of the object are transparent. &lt;br&gt;
Machine learning, on the other hand, is &lt;a href="https://towardsdatascience.com/machine-learning-for-vehicle-detection-fd0f968995cf" rel="noopener noreferrer"&gt;already able&lt;/a&gt; to recognize virtually any model of car. Having identified a vehicle, an ML-driven environment recognition system could then download simplified, low-res geometry from a common database of 3D car meshes and position it exactly where the identified car stands in real life, enabling occlusion for the vehicle, complete with transparency, at the cost of a few kilobytes of data and a few seconds of network time while the scene is set. &lt;br&gt;
Such a system could also make use of the growing number of AR/VR meshes at &lt;a href="https://poly.google.com/" rel="noopener noreferrer"&gt;Google Poly&lt;/a&gt;, which, it seems, may eventually become the Google Maps of AR, or fold into Maps as an AR-focused sub-service.&lt;br&gt;
Likewise, ML-enabled AR scanning systems would be able to classify the individuals in a scene and understand what (and perhaps who) they are, retaining that understanding even if they disappear from view and then reappear.&lt;br&gt;
Further, local mobile neural networks could solve the four-meter scanning limitation by recognizing objects semantically. They would even be capable of distinguishing between objects that are impermanent (such as parked cars and bystanders), objects which are more likely to need occlusion (lamp-posts), and distant objects such as skyscrapers. &lt;br&gt;
With such a comprehensive understanding of scale and depth, currently unavailable to AR without specialist pre-mapping, it would finally be possible to create augmented reality experiences that use occlusion intelligently and without dimensional limitations.&lt;/p&gt;
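The recognize-then-download flow described above might look something like the following sketch. Every name in it (the mesh database, the `place_occluder` helper, the pose format) is hypothetical, invented purely to illustrate the idea:

```python
# Hypothetical sketch of the recognize-then-download flow: classify an
# object, fetch a pre-built low-poly proxy mesh from a shared database,
# and register it at the detected pose for occlusion. The classifier,
# mesh store, and pose format are all stand-ins, not a real API.

MESH_DB = {  # stand-in for a networked mesh repository (a few KB per entry)
    "sedan": {"vertices": 312, "transparent_parts": ["windows"]},
    "hatchback": {"vertices": 280, "transparent_parts": ["windows"]},
}

def place_occluder(label, pose, scene):
    """Attach a proxy mesh for a recognized object to the AR scene graph."""
    mesh = MESH_DB.get(label)
    if mesh is None:
        return False          # no proxy available: fall back to scanning
    scene.append({"mesh": mesh, "pose": pose, "label": label})
    return True

scene = []
# e.g. an ML detector reports a sedan at a 3D position with a 90-degree heading
placed = place_occluder("sedan", pose=((1.2, 0.0, 3.5), 90.0), scene=scene)
```

The point of the sketch is the cost model: a dictionary lookup and a tiny download replace seconds of on-device geometry reconstruction.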

&lt;h2&gt;
  
  
  &lt;strong&gt;Generating 3D Models for Occlusion with Neural Networks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Not every item in a scene would have a corresponding low-poly mesh available from a network. Sometimes the device would need to recreate the geometry the hard way, as is currently the standard practice.&lt;br&gt;
Luckily, creating 3D meshes from the depth maps of RGB-D images is one of the core pursuits of computer vision research. At present, a handful of companies are cautiously releasing ML-based systems for generating geometry with low-impact, on-device neural networks.&lt;br&gt;
One such is Ubiquity6, which claims to use a mix of deep learning, 3D mapping, and photogrammetry to recreate an observed environment in an &lt;a href="https://www.wired.com/story/this-startup-makes-augmented-reality-socialand-ubiquitous/" rel="noopener noreferrer"&gt;impressively fast 30 seconds&lt;/a&gt;. &lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hZrveuAURxo"&gt;
&lt;/iframe&gt;
&lt;br&gt;
However, the company has released no specific details of its methodology and seems to avoid the subject of occlusion.&lt;br&gt;
Spun out of the Oxford Active Vision Lab, 6D.ai is &lt;a href="https://twitter.com/mattmiesnieks/status/1017239132303527936?lang=en" rel="noopener noreferrer"&gt;less shy&lt;/a&gt; about the thorny subject of occlusion:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/q9PFFqAABBM"&gt;
&lt;/iframe&gt;
&lt;br&gt;
As with the vast majority of newsworthy AR occlusion systems, the evidence of object-hiding is generally of the “blink and you'll miss it” variety. In this 6D.ai Twitter video, a ball drops off a domestic surface:&lt;br&gt;
&lt;a href="https://twitter.com/wintermoot/status/1020133483144810496" rel="noopener noreferrer"&gt;https://twitter.com/wintermoot/status/1020133483144810496&lt;/a&gt;&lt;br&gt;
A brief glimpse of the geometry (in green, below), suggests the rough edges of the standard polygon shape generated by a low-resolution depth-map layer:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsun9-1.userapi.com%2Fc851036%2Fv851036402%2F19a76e%2FH6lXNX3IEQ4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsun9-1.userapi.com%2Fc851036%2Fv851036402%2F19a76e%2FH6lXNX3IEQ4.jpg" alt="no"&gt;&lt;/a&gt;&lt;/p&gt;
 Source photo credit: Twitter 
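Before any meshing happens, an RGB-D depth map is typically back-projected into a 3D point cloud through a pinhole camera model. A minimal sketch, assuming illustrative intrinsics rather than any particular sensor's calibration:

```python
import numpy as np

# Sketch: back-projecting an RGB-D depth map into a camera-space point
# cloud with a pinhole model, the usual first step before meshing.
# The intrinsics (fx, fy, cx, cy) here are illustrative defaults, not
# the calibration of any specific device.

def depth_to_points(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """Return an (H*W, 3) array of camera-space points, one per pixel."""
    h, w = depth.shape
    cx = (w - 1) / 2 if cx is None else cx     # default: image center
    cy = (h - 1) / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx                  # pinhole back-projection
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat wall two meters away maps every pixel to z = 2.0
points = depth_to_points(np.full((4, 4), 2.0))
```

The rough, jagged occluders seen in these demos come from running exactly this kind of back-projection on low-resolution, noisy depth estimates.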



&lt;p&gt;In this video, the company demonstrates the occlusion potential of its system:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NZwt75eii4Q"&gt;
&lt;/iframe&gt;
&lt;br&gt;
The occlusion mapping is very approximate, even in this showcase, and even where objects are simple cuboids.&lt;br&gt;
This may demonstrate an inherent problem in AR systems aimed at mobile devices, which are likely to need to translate even the cleanest poly meshes into relatively pixelated RGB approximations with jagged edges that distort the occlusion. Though 6D.ai claims that its system uses no depth cameras, it could be coming up against this bottleneck in practice. Without details of the implementation, it is hard to tell.&lt;br&gt;
The meshes that the 6D.ai system creates are actually more complex and resource-intensive than would be needed for occlusion, with a lot of redundant detail, such as the indentation in the cushions in the mapping of this sofa:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsun9-56.userapi.com%2Fc851036%2Fv851036402%2F19a776%2Frlqyy-ZdnhM.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsun9-56.userapi.com%2Fc851036%2Fv851036402%2F19a776%2Frlqyy-ZdnhM.jpg" alt="no"&gt;&lt;/a&gt;&lt;/p&gt;
 Source photo credit: artillry.co 



&lt;p&gt;Only a small fraction of the points generated (usually by RGB-D depth maps) in this way are needed to create useful occlusion. However, building such a frugal and lightweight mesh may require a more intelligent approach to geometry creation, and perhaps a deeper and more inventive application of local neural networks. &lt;br&gt;
We can also note from the above video (1:30) that the “live meshing,” which maps the real world in real time, never attempts to map anything farther away than four meters, and avoids pointing the camera at anything beyond that distance. This is where the machine learning-based AR mapping systems of the future seem set to lend a hand, by freeing the AR system from the three limited methods of geometry mapping mentioned earlier.&lt;/p&gt;
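One way to arrive at such a frugal occluder is vertex clustering: snapping vertices to a coarse 3D grid and merging duplicates, so that redundant detail like cushion indentations collapses away. A rough sketch (the grid size and input data are arbitrary):

```python
import numpy as np

# Sketch of one "frugal mesh" tactic: vertex clustering, which quantizes
# vertices to a coarse 3D grid and keeps one representative per cell,
# trading fine detail for a much lighter occlusion proxy.

def cluster_vertices(vertices, cell=0.10):
    """Quantize vertices to a grid of `cell` meters and drop duplicates."""
    keys = np.round(np.asarray(vertices) / cell).astype(int)
    unique_keys = np.unique(keys, axis=0)      # one key per occupied cell
    return unique_keys * cell                  # representative vertices

# 1,000 noisy points on a sofa-sized slab collapse to a handful of cells
rng = np.random.default_rng(0)
dense = rng.uniform([0.0, 0.0, 0.0], [2.0, 0.8, 0.9], size=(1000, 3))
sparse = cluster_vertices(dense, cell=0.25)
```

Occlusion only needs the silhouette to be right from the viewer's position, so an aggressive cell size is usually acceptable here.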

&lt;h2&gt;
  
  
  &lt;strong&gt;Machine Learning Solutions Go Mobile&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In 2014, researchers from the University of Bonn &lt;a href="https://web.archive.org/web/20180812015223/https:/www.ais.uni-bonn.de/papers/KI_2014_Hoeft_RGB-D_Semantic_Segmentation.pdf" rel="noopener noreferrer"&gt;proposed&lt;/a&gt; a system of “live” segmentation capable of generating occlusion, driven by convolutional neural networks (CNNs). The technique involves classifying every pixel in a video frame according to the depth map data of the RGB-D image.&lt;br&gt;
This is one of many machine learning-based extraction and geometry-generating techniques that may eventually be &lt;a href="https://pdfs.semanticscholar.org/d789/6d6be118386a1f76f389210ca4e3a87b0d4a.pdf" rel="noopener noreferrer"&gt;transferable to mobile devices&lt;/a&gt;. All such approaches depend on the evolution of local machine learning hardware and software implementations on popular, portable devices.&lt;br&gt;
Fortunately, there is a notable impetus in ML research towards this optimization and migration to local processing where possible—a movement driven by the demands of IoT and Big Data, and the enthusiasm of consumer hardware producers to leverage machine learning in their devices. Slimmed-down machine learning frameworks and workflows are being met halfway by the major manufacturers incorporating AI-oriented hardware into their product lines. &lt;/p&gt;
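The output of such a network is a dense label map, one class per pixel, which an occlusion system can consume directly. As a toy stand-in for the CNN itself, the sketch below bins pixels by depth to produce the same kind of map (the class scheme and thresholds are arbitrary):

```python
import numpy as np

# Toy stand-in for a CNN's dense per-pixel output: bin each pixel of a
# depth map into foreground / midground / background classes. A real
# system would predict these labels with a trained network; the point
# here is only the shape of the result (one class per pixel).

def label_pixels(depth, near=1.0, far=4.0):
    """Return an integer class per pixel: 0 = near, 1 = mid, 2 = far."""
    labels = np.full(depth.shape, 1, dtype=np.int64)   # default: midground
    labels[depth < near] = 0
    labels[depth >= far] = 2
    return labels

depth = np.array([[0.5, 2.0], [3.9, 6.0]])
labels = label_pixels(depth)
```

A label map like this lets the compositor treat “near” pixels as occluders without rebuilding geometry for them on every frame.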

&lt;h2&gt;
  
  
  &lt;strong&gt;Useful Limitations&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Solving occlusion still leaves some other issues to clear up, such as effective hand presence, matched lighting, shadows, and ghosted overlaid images. However, nothing affects the “reality” of AR more than the potential to integrate virtual worlds into our world.&lt;br&gt;
It may ultimately benefit the development of AR mapping systems that current resources are so scant and the margins so critical, since such conditions are typically a spur to invention. The question is whether those constraints will produce ingenious breakthroughs in time to fend off a feared AR winter.&lt;br&gt;
The failure of the ponderous VR model to take flight in over three decades, combined with the (to date) unique phenomenon of Pokémon Go, indicates that AR must aim higher than tethered home video games, or special urban events that a majority of consumers live too far away from. It cannot afford to be an urban or exclusive technology, because the intense effort and infrastructure that will support it needs the same economy of scale as the smartphone sector itself.&lt;/p&gt;

</description>
      <category>ar</category>
      <category>augmentedreality</category>
    </item>
  </channel>
</rss>
