Frequently Asked Questions
What are the advantages of LiDAR in comparison to Cameras and Radar?
Where is Outsight located?
Do you work with Integration partners?
Is it possible to integrate data from sensors of different LiDAR manufacturers?
Where can I learn more about the technology and its applications?
What are the applications of LiDAR technology?
What is the extent of your expertise in processing 3D data?
How should one plan the selection and placement of LiDAR systems?
Does your company provide the hardware for data processing?
Are you hiring?
Shift Perception Software
Does your software operate on the Edge or is it Cloud-based?
What do you mean by real-time processing?
Do you provide the hardware for data processing?
What do you mean by 'Open' and 'Standard' data format?
Can Shift Perception integrate data from sensors of different manufacturers?
Can I use a Wireless connection?
What degree of performance can one anticipate from your systems?
Do I need to store the output data in a database?
Shift Analytics Software
Does Shift Analytics operate on the Edge or is it Cloud-based?
Are these analytics calculated in real-time?
Do I need to store the output data in a database?
Why use LiDAR to generate Spatial Analytics?
Can I use your KPI data with my own Dashboard?
What degree of performance can one anticipate from your KPI calculations?
How scalable is your system?
Do I need consultancy services to build my analytics?
Shift Simulator Software
Do I need to install software on my computer?
Can I simulate any LiDAR Sensor that exists?
Can I plan the setup of a large premise or venue?
To what extent has the solution been proven?
Can I do my own simulations?
When should I make simulations?
Does the simulator work only for people monitoring applications?
Can I use it for Automotive ADAS simulations?
Spatial AI
What is Spatial AI?
How can Spatial Intelligence be applied to Airport Operations?
What's the difference between Spatial AI and Physical AI?
About Outsight
What is Outsight?
Outsight is the global leader in Physical AI and 3D Spatial Intelligence. The company develops software that processes real-time data from LiDAR sensors to continuously track how people and vehicles move, behave, and interact within physical environments. This technology, known as Spatial Intelligence, is delivered through Motional Digital Twins — live digital replicas of physical flows that enable infrastructure operators to optimize operations, enhance visitor experiences, and strengthen safety and security.
Outsight's platform, called Shift, is deployed at scale across five continents in airports, tourism venues, hospitals, factories, stadiums, retail spaces, and road infrastructure. Outsight is the only company in the LiDAR software processing space to have won a CES Best of Innovation Award, and has been recognized seven times by Gartner as a category-defining emerging leader in Spatial Computing and Digital Twin technologies. The company serves some of the world's busiest airports, including Dallas Fort Worth International Airport, Paris-Charles de Gaulle, and Rome Fiumicino, among many others.
Where is Outsight headquartered and where does it operate?
Outsight operates globally from three offices: Paris (France), San Francisco (USA), and Hong Kong. From these locations, the company supports customers and deployments across five continents, including North America, Europe, Asia, the Middle East, and beyond.
The company's global footprint reflects the universal nature of the challenges it addresses: wherever infrastructure operators manage complex flows of people and vehicles — whether in airports, smart cities, stadiums, or industrial sites — Outsight's Spatial Intelligence platform delivers value. Strategic partnerships with organizations like NEC, AWS, and Embotech further extend Outsight's reach into key markets worldwide.
Who founded Outsight and when?
Outsight was founded in 2019 by Raul Bravo and Cédric Hutchings. Both founders bring deep expertise in hardware, software, and scaling technology companies internationally. Cédric Hutchings serves as CEO, and Raul Bravo serves as President.
Since its founding, Outsight has grown into the most experienced and most awarded team in the Physical AI and 3D LiDAR software industry. In just a few years, the company has moved from foundational research and development to large-scale commercial deployments, earning recognition from institutions like Gartner, Frost & Sullivan, the European Innovation Council, CES, and many others.
What awards and recognitions has Outsight received?
Outsight is the most awarded company in the Physical AI and 3D LiDAR industry. Its recognitions span innovation excellence, institutional validation, and compliance certifications. Key awards include the CES Best of Innovation Award in Smart Cities, the Prism Award for Transportation (making Outsight the youngest company ever to receive it), the Frost & Sullivan Global Technology Innovation Leadership Award, the Edge AI and Vision Product of the Year, the Airport Technology Excellence Award for Innovation, the Terminal Excellence Innovation Award at Inter Airport Europe, and the Gold Medal at Data & AI Night in the Innovative Solutions category.
On the institutional side, Outsight has been cited seven times by Gartner — including as a key player in Spatial Computing alongside Nvidia, Meta, and Alphabet — selected as one of the top 50 European start-ups by the European Innovation Council, and included in Sifted's B2B SaaS Rising 100. The company also holds SOC 2 compliance, ISO 27001 certification, and the unique BASt certification from the German Federal Highway and Transport Research Institute for highway truck parking monitoring using native 3D sensors — the only company worldwide to have achieved this.
What major partnerships has Outsight established?
Outsight has formed strategic partnerships with leading technology and infrastructure companies to accelerate the global adoption of Spatial Intelligence. Key partnerships include a collaboration with NEC Corporation of America to deliver advanced airport operational intelligence by integrating Outsight's Spatial AI into NEC's airport management platform, and a strategic partnership with Amazon Web Services (AWS) that makes Outsight's platform available on AWS Marketplace for large-scale deployments across transportation hubs.
Outsight also collaborates with Embotech for automated vehicle marshalling systems, with Hesai, Ouster, Innoviz, Robosense, Seyond, and other major LiDAR hardware manufacturers for multi-vendor sensor compatibility, and with GridMatrix for smart city traffic analytics. These partnerships reflect Outsight's position as a platform company that works across sensor manufacturers, cloud providers, and system integrators to deliver end-to-end Spatial Intelligence solutions.
Physical AI & Spatial Intelligence
What is Physical AI?
Physical AI, also known as Spatial AI, is a branch of artificial intelligence designed to understand and analyze movement and behavior in three-dimensional space. It processes data from 3D sensors such as LiDAR to track individual objects, recognize patterns, and generate insights about how people and assets interact within physical environments. Physical AI transforms raw 3D sensor data into structured, spatially defined events — creating the rich, contextualized datasets that both human operators and Agentic AI systems require.
Physical AI represents the fourth major data modality in artificial intelligence, after text, audio/speech, and image/video. While traditional AI excels at processing language, sound, and 2D images, Physical AI brings intelligence to the three-dimensional physical world. This makes it the technological foundation for applications that require understanding how things move and interact in real spaces — from monitoring passenger flows in airports to guiding autonomous robots in warehouses. Outsight's platform is the most proven implementation of Physical AI at scale, powering Motional Digital Twins deployed across five continents.
What is Spatial Intelligence?
Spatial Intelligence refers to the actionable insights generated when Physical AI processes data from 3D native sensors and other external sources within a consistent 3D reference system. It is the ability to perceive spatial data and derive meaningful, decision-ready insight from it: understanding not just where things are, but how they move, behave, and interact in the physical world.
In practice, Spatial Intelligence answers questions that were previously impossible to address at scale: How long is a passenger actually waiting in a queue? Which path through a terminal leads to the most retail engagement? Where is congestion likely to form in the next 15 minutes? How are vehicles interacting with pedestrians at an intersection? By making these invisible physical flow patterns visible and quantifiable, Spatial Intelligence enables infrastructure operators to shift from reactive management to proactive, data-driven decision-making. Outsight's Motional Digital Twins are the concrete medium through which operators access and leverage the value of Spatial Intelligence.
How does Physical AI relate to other AI modalities like text, audio, and vision?
Artificial intelligence has evolved through several major data modalities. The first wave focused on text — natural language processing, search, and generative AI. The second addressed audio and speech — voice assistants, transcription, and synthesis. The third brought image and video understanding — object recognition, classification, and scene analysis through 2D cameras. Physical AI represents the fourth and newest frontier: 3D spatial data.
What sets Physical AI apart is that it deals with the real, physical world in three dimensions and in real time. While image-based AI analyzes flat pixel grids, Physical AI works with 3D point clouds that inherently contain distance, volume, speed, and spatial relationships. This enables capabilities that 2D vision cannot achieve: centimeter-level positioning accuracy, reliable tracking through crowds and occlusions, operation in complete darkness, and — critically — full privacy preservation since no images are captured. Outsight is at the forefront of this fourth AI modality, building the software that transforms 3D sensor data into the Spatial Intelligence that operators and AI agents need to understand and optimize physical environments.
What is the difference between Spatial AI and traditional computer vision?
Traditional computer vision is based on 2D image and video analysis. It processes flat pixel arrays captured by cameras, using algorithms to detect objects, recognize faces, classify scenes, and track movement within the frame. While powerful, it is inherently limited by the 2D nature of its input: it cannot natively measure distances, volumes, or precise 3D positions without additional estimation techniques. It is also affected by lighting conditions, occlusions, and the significant privacy concerns that come with capturing identifiable imagery.
Spatial AI, by contrast, operates natively in three dimensions. It processes 3D point clouds from sensors like LiDAR, where every data point includes precise distance, position, and geometric information. This enables accurate measurement of real-world spaces, reliable tracking of individuals even in dense crowds, operation regardless of lighting, and privacy by design — since no images or personally identifiable information are ever captured. Spatial AI does not replace traditional computer vision; rather, it adds a fundamentally new layer of understanding. Outsight's platform can integrate camera data for classification purposes while using LiDAR as the backbone for positioning and tracking, combining the strengths of both modalities within a unified 3D reference system.
Motional Digital Twins
What is a Motional Digital Twin?
A Motional Digital Twin (MDT) is a real-time digital replica of a real-world space that delivers Spatial Intelligence by continuously tracking how every person and vehicle moves and interacts within that environment. Unlike traditional Digital Twins that monitor static buildings and assets, Motional Digital Twins capture the dynamic reality of physical flows — the movement of people, vehicles, baggage, and equipment — with centimeter-level precision and full privacy preservation through anonymous tracking.
An MDT collects and processes real-time data from native 3D sensors such as LiDAR, combined with supplementary sources from external sensors and systems like flight information displays, point-of-sale terminals, access control, cameras (for classification only), and Wi-Fi/Bluetooth data. All of these sources are anchored to the same 3D coordinate system, the same time axis, and the same object identifiers. This creates a unified, continuously updated model of reality that enables operators to monitor what is happening now, analyze what happened in the past, and predict what will happen next. Outsight currently deploys Motional Digital Twins at scale across airports, tourism venues, hospitals, factories, stadiums, retail environments, and road infrastructure on five continents.
How do Motional Digital Twins differ from traditional Digital Twins?
The Digital Twin concept has evolved through three distinct generations. The first generation — Building Information Modelling (BIM) — focused on static, record-based representations used primarily during design and construction. The second generation introduced real-time monitoring of fixed assets through IoT sensors, enabling predictive maintenance of equipment like HVAC systems or escalators. Motional Digital Twins represent the third and most advanced generation, adding the critical dimension of real-time movement digitization.
The fundamental difference is that traditional Digital Twins model static structures and assets, while Motional Digital Twins model how people and vehicles move through and interact with those structures. A traditional Digital Twin might tell you that a terminal has 20 check-in counters; a Motional Digital Twin tells you that counter 7 has a 12-minute wait, that 340 passengers are currently in the security hall, that congestion is building near Gate B22, and that a VIP passenger's predicted arrival time at the gate is 8 minutes. This shift from static to dynamic, from asset-centric to flow-centric, is what makes Motional Digital Twins transformative for day-to-day operations.
What are the core capabilities of a Motional Digital Twin?
A Motional Digital Twin is built on three fundamental processing layers. The first is Localization — the ability to continuously track the precise position of every person and object across an entire premises, assigning a unique anonymous identifier to each. This is the cornerstone on which all other capabilities depend. The second layer is Perception — understanding the surrounding environment, classifying objects (people, staff, vehicles, vulnerable road users), detecting behaviors (running, loitering, queueing), and associating attributes from external data sources with each tracked individual. The third layer is Analytics — transforming raw tracking and perception data into actionable KPIs, real-time alerts, historical trends, and predictive insights.
These three layers work together to deliver a complete picture of physical flows. The MDT provides insights across three time horizons: the present (live 3D maps, real-time dashboards, immediate alerts), the past (full journey replay, historical KPI trends, the ability to "time travel" to any past moment), and the future (anticipated KPIs, per-individual journey predictions, early warnings of congestion or safety risks). All of this is presented through an intuitive Live 3D Map that provides a shared spatial reference for all stakeholders, with native privacy preservation — no images are captured or displayed, and every person is represented by an anonymized symbol.
What types of insights do Motional Digital Twins provide?
Motional Digital Twins continuously deliver insights on each individual's profile (who they are in terms of classification — passenger, staff, vehicle, vulnerable road user), position (where they are with centimeter-level precision), behavior and interactions (what they are doing, including queueing, dwelling, running, or interacting with assets), timing (when events occur, with precise timestamps), and predictions (what is likely to happen next based on patterns and historical data).
Concretely, this translates into dozens of distinct KPIs: zone-based metrics like occupancy and dwell time, queue metrics like wait times and service times, line-crossing metrics like directional flow counts, and asset utilization metrics like usage rates and open/closed status of service points. These insights serve multiple stakeholders simultaneously: operations teams use them for staffing and congestion management, commercial teams use them for retail optimization and revenue growth, security teams use them for threat detection and compliance, and planning teams use them for data-driven simulations and infrastructure design. The platform delivers these insights through live 3D visualization, dashboards, PDF reports, CSV exports, APIs, and real-time alerts via SMS or email.
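To illustrate how a tracking-based KPI differs from an estimate: because every person keeps one stable anonymous identifier, a queue wait time is simply the span between an individual's first and last observation inside the queue zone. The following is a hypothetical sketch of that idea, not Outsight's actual pipeline; the function and data names are invented for illustration.

```python
def wait_times(events):
    """events: (anonymous_track_id, timestamp_s) observations inside a queue zone,
    in chronological order. Per-individual wait = last seen - first seen."""
    first, last = {}, {}
    for tid, ts in events:
        first.setdefault(tid, ts)  # keep the earliest sighting
        last[tid] = ts             # overwrite with the latest sighting
    return {tid: last[tid] - first[tid] for tid in first}

# Two anonymous individuals observed entering and leaving a queue:
obs = [("p1", 0.0), ("p2", 10.0), ("p1", 95.0), ("p2", 250.0)]
print(wait_times(obs))  # → {'p1': 95.0, 'p2': 240.0}
```

The point of the sketch is the dependency described above: without a consistent identifier per individual, the subtraction that yields an actual (rather than estimated) wait time is impossible.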
What is Continuous Individual Tracking and why does it matter?
Continuous Individual Tracking is the cornerstone capability of a Motional Digital Twin. It is the ability to precisely position every person or object across an entire premises — even large, multi-zone facilities — while assigning a unique, consistent anonymous identifier that is maintained throughout the individual's complete journey, from entry to exit.
This capability is foundational because everything else depends on it. Without a stable, unique identifier per individual, it is impossible to measure actual wait times (as opposed to estimates), construct complete end-to-end journeys, associate external data (like flight information) with specific individuals, distinguish staff from visitors in KPI calculations, or make meaningful predictions about future behavior. LiDAR technology's ability to natively perceive in 3D, combined with Outsight's advanced Physical AI software, enables Continuous Individual Tracking even in complex, crowded environments where camera-based systems typically lose track of individuals due to occlusions, lighting changes, or visual similarity. This is what makes LiDAR-based Spatial Intelligence fundamentally more reliable than legacy monitoring approaches.
How do Motional Digital Twins support Agentic AI?
Motional Digital Twins generate unprecedented volumes of structured, spatially and temporally consistent data — capturing both granular individual-level insights and comprehensive aggregated KPIs. This structured richness creates an ideal environment for AI Agents to operate and evolve. Outsight defines four maturity levels for AI agents within the MDT ecosystem: Reporter (delivering factual outputs in natural language), Insighter (uncovering hidden patterns and anomalies), Predictor (anticipating future states and providing forecasts), and Prescriptor (recommending optimal actions by evaluating scenarios and trade-offs).
For example, a Reporter agent can answer "What is the current waiting time at United Airlines check-in?", while a Prescriptor agent can build an optimized workforce schedule for passport control that keeps wait times under three minutes. Because Motional Digital Twins attach rich context — position, trajectory, classification, behavior, timestamps, and external data — to every tracked individual, AI agents have the structured, real-world grounding they need to reason about physical environments in ways that were previously impossible. This positions Motional Digital Twins as a critical data infrastructure layer for the emerging era of Agentic AI applied to physical operations.
LiDAR Technology
What is LiDAR and how does it work?
LiDAR stands for Light Detection and Ranging. It is a remote sensing technology that measures distances by emitting pulses of invisible laser light, timing how long it takes for the light to reflect back from surrounding surfaces, and repeating this process millions of times per second. The result is a dense, highly accurate three-dimensional representation of the environment called a point cloud — a collection of millions of 3D data points that precisely map the geometry of everything in the sensor's field of view.
LiDAR was initially developed by NASA in the 1970s for space applications. Since then, massive investments — particularly from the automotive industry — have driven the technology to maturity, creating a diverse ecosystem of LiDAR manufacturers worldwide. Today's LiDAR sensors offer performance characteristics that make them ideal for infrastructure monitoring: they produce highly detailed spatial measurements, work reliably day and night regardless of lighting conditions, and — critically — they do not capture images or personally identifiable information, making LiDAR a privacy-preserving sensing technology by design.
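The time-of-flight principle described above reduces to a one-line formula: the measured round-trip time multiplied by the speed of light, halved. A minimal illustrative sketch (the function name and the 667 ns figure are examples, not Outsight code):

```python
# Time-of-flight ranging: a laser pulse travels to a surface and back,
# so the one-way distance is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_ns: float) -> float:
    """Distance in meters from a measured round-trip time in nanoseconds."""
    return C * (round_trip_ns * 1e-9) / 2.0

# A reflection returning after ~667 ns corresponds to a surface ~100 m away.
print(round(tof_distance(667.0), 1))  # → 100.0
```

Repeating this measurement millions of times per second, each along a known laser direction, is what yields the 3D point cloud.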
Why is LiDAR the enabling technology for Spatial Intelligence?
LiDAR is the only sensing technology that can accurately perceive in three dimensions over long distances in real time. This combination of native 3D data, precision, range, and privacy makes it uniquely suited to power Spatial Intelligence. Where cameras produce flat 2D images that require complex algorithms to estimate depth and distance, LiDAR inherently provides exact 3D coordinates for every point it measures. This enables centimeter-level positioning accuracy, reliable detection of people and objects at distances up to hundreds of meters, and consistent performance in any lighting condition — including total darkness.
LiDAR also enables what Outsight calls "occlusion-free perception" when multiple sensors are fused together. Because each laser pulse is natively positioned in a 3D coordinate system, data from multiple LiDAR units can be seamlessly merged into a single, unified point cloud — creating a virtual omniscient sensor that sees around corners and through crowds. This is fundamentally different from cameras, where merging overlapping views requires complex image processing and still cannot resolve objects hidden behind others. Combined with the fact that LiDAR captures geometry rather than imagery, these properties make it the third generation of people and vehicle monitoring technology — recognized as such by multiple industry awards — and the foundation on which Motional Digital Twins are built.
How does LiDAR compare to cameras and other sensing technologies?
LiDAR, cameras, stereovision, Wi-Fi/Bluetooth, and radar each have different strengths and limitations for monitoring people and vehicle flows. Cameras offer familiar imagery and can support classification tasks, but they are affected by lighting conditions, raise significant privacy concerns, struggle with occlusions in crowded environments, and cannot natively measure 3D distances. Stereovision provides some depth estimation but is limited in range and accuracy. Wi-Fi/Bluetooth tracking can cover large areas but offers low precision (meters, not centimeters) and cannot reliably track individuals. Radar works in all weather but lacks the resolution needed for precise person tracking.
LiDAR combines the most important advantages: it provides native 3D spatial data with centimeter-level accuracy, operates in any lighting condition including complete darkness, covers large areas with fewer sensors, and preserves privacy by design since no images are captured. When paired with advanced Spatial AI software like Outsight's Shift platform, LiDAR enables capabilities that no other single technology can match — particularly Continuous Individual Tracking across entire premises, which is the foundation of Motional Digital Twins. That said, Outsight's platform is designed to be multi-modal: it can integrate camera data for classification, Wi-Fi data for broader context, and business system data (POS, flight information, access control) into a unified 3D reference frame, using LiDAR as the spatial backbone.
What is occlusion-free (shadowless) perception?
Occlusion-free perception, also called shadowless perception, is a unique capability enabled by fusing data from multiple LiDAR sensors into a single, unified 3D point cloud. In any physical environment, individual sensors — whether cameras or LiDAR — have blind spots: objects can be hidden behind other objects, people can be obscured by other people, and certain areas may fall outside a sensor's field of view. This is the occlusion problem, and it is one of the most significant challenges in monitoring crowded, complex spaces.
The breakthrough with LiDAR is that because each laser pulse is natively positioned in a 3D coordinate system, advanced fusion software like Outsight's Shift Perception can seamlessly merge data from multiple sensors into a common 3D point cloud. Each sensor contributes its unique viewpoint, and the combined result is far richer than any single sensor alone — effectively creating a virtual omniscient 3D sensor that sees around and through obstacles. Outsight describes this as "1 + 1 = 3": the fused perception is qualitatively superior to the sum of its parts. This requires sophisticated calibration and synchronization across potentially hundreds of LiDAR units — a challenge that Outsight routinely handles in airports, train stations, and factories where thousands of people must be tracked in real time.
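The fusion step can be shown in miniature: assuming each sensor's calibration is expressed as a rigid transform (a rotation and a translation) into the shared coordinate system, merging clouds becomes a transform-then-concatenate operation. This is a deliberately simplified sketch, not Outsight's production code, which must also handle synchronization and calibration at scale.

```python
import numpy as np

def to_common_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a rigid transform (3x3 rotation R, 3-vector translation t) to an Nx3 cloud."""
    return points @ R.T + t

def fuse(clouds, calibrations):
    """Merge per-sensor clouds into one point cloud in the common coordinate system."""
    return np.vstack([to_common_frame(p, R, t)
                      for p, (R, t) in zip(clouds, calibrations)])

# Two sensors, each seeing a different part of the scene:
a = np.array([[1.0, 0.0, 0.0]])  # sensor A is already in the common frame
b = np.array([[0.0, 1.0, 0.0]])  # sensor B is mounted 5 m along the x axis
fused = fuse([a, b], [(np.eye(3), np.zeros(3)),
                      (np.eye(3), np.array([5.0, 0.0, 0.0]))])
print(fused.shape)  # → (2, 3)
```

Because every point carries exact 3D coordinates, the merged cloud needs no image stitching: a point hidden from one sensor is simply supplied by another, which is the essence of the "1 + 1 = 3" effect described above.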
Are all LiDAR sensors the same?
No. Unlike cameras, which all rely on the same underlying imaging principles, LiDAR sensors can be built in fundamentally different ways. Manufacturers must choose among many design variables: device type (rotating, dome-like, narrow field of view), illumination technique (scanning, flash, hybrid), detection method (time of flight, FMCW), laser technology and wavelength, field of view (from 20° to 360° both horizontally and vertically), angular resolution, points per second, range, and more. The theoretical number of possible LiDAR configurations exceeds 15 million, and in practice, commercially available sensors vary enormously in their characteristics and cost — from a few hundred dollars to over $20,000 per unit.
This diversity means that no single LiDAR sensor is optimal for every situation. On the same premises, an operator may need wide-field, high-density sensors for indoor halls with low ceilings, long-range sensors for outdoor parking areas, and dome-type sensors for corridor monitoring. Most Outsight customers use an average of three different hardware vendors on the same site. Outsight's platform addresses this complexity through two key capabilities: the Shift Simulator, which enables multi-vendor performance and cost evaluation during the design phase, and Shift Perception, which provides vendor-agnostic processing during operations — ingesting and fusing data from any combination of LiDAR manufacturers and models in real time.
The Shift Platform
What is Outsight's Shift platform?
Shift is Outsight's Spatial Intelligence software platform — the concrete implementation of Outsight's Physical AI and Spatial AI technology. It is the engine that transforms raw 3D LiDAR data from any manufacturer into actionable Spatial Intelligence, delivered through Motional Digital Twins. Shift is deployed at scale across five continents and processes data in real time, handling the massive computational demands of tracking thousands of individuals simultaneously with centimeter-level accuracy.
The Shift platform consists of several modules that can be combined to address different use cases and deployment phases: Shift Perception for real-time 3D data ingestion and processing, Shift Analytics for business intelligence and KPI computation, and the Shift Simulator for project planning and sensor optimization. Shift is hardware-agnostic — it works with LiDAR sensors from all major manufacturers — and delivers its outputs through live 3D maps, dashboards, PDF reports, CSV exports, APIs, and real-time alerts. The platform supports over-the-air updates and is designed to scale from single-zone installations to premises-wide deployments covering entire airports or industrial campuses.
What is Shift Perception?
Shift Perception is the foundational processing module of Outsight's Shift platform. It ingests and combines raw 3D data from multiple LiDAR sensors — potentially from different manufacturers, with different data formats, scanning patterns, and performance characteristics — and transforms this massive stream of data into structured, actionable information in real time. The core output is Continuous Individual Tracking: every detected person, vehicle, or object is assigned a unique anonymous identifier and tracked with centimeter-level precision throughout their entire journey across the monitored premises.
Shift Perception handles the most computationally intensive and technically challenging aspects of the pipeline: multi-sensor calibration (aligning all sensors to a common 3D coordinate system), temporal synchronization (reconciling the fact that different sensors scan the environment at different times), point cloud fusion (merging data from all sensors into a single, occlusion-free 3D representation), object detection and clustering, and real-time tracking. The module also supports integration with non-LiDAR data sources such as cameras (for classification), Wi-Fi, IoT sensors, and business systems, anchoring all information to the same spatial and temporal reference. This multi-vendor, multi-modal fusion capability is a key differentiator for Outsight and the reason its platform can be deployed in diverse, heterogeneous environments.
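To make the tracking step concrete, here is a deliberately simplified sketch of frame-to-frame association: each existing track is matched to its nearest new detection, and unmatched detections receive fresh anonymous IDs. Production trackers use far more robust methods (motion models, global assignment, occlusion handling); this illustrates only the principle, and every name in it is hypothetical.

```python
import math

def associate(tracks, detections, max_dist=1.0):
    """Greedy nearest-neighbor association.
    tracks: {anonymous_id: (x, y)} last known positions.
    detections: [(x, y)] positions in the new frame.
    Returns the updated {anonymous_id: (x, y)} mapping."""
    next_id = max(tracks, default=-1) + 1
    updated, free = {}, list(detections)
    for tid, pos in tracks.items():
        if not free:
            break
        best = min(free, key=lambda d: math.dist(pos, d))
        if math.dist(pos, best) <= max_dist:   # close enough: same individual
            updated[tid] = best
            free.remove(best)
    for det in free:                           # unseen objects get fresh anonymous IDs
        updated[next_id] = det
        next_id += 1
    return updated

tracks = {0: (0.0, 0.0), 1: (5.0, 5.0)}
dets = [(0.2, 0.1), (5.1, 4.9), (9.0, 9.0)]
print(associate(tracks, dets))  # → {0: (0.2, 0.1), 1: (5.1, 4.9), 2: (9.0, 9.0)}
```

The invariant the sketch preserves is the one that matters for Continuous Individual Tracking: an identifier, once assigned, follows the same individual across frames.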
What is Shift Analytics?
Shift Analytics is the business intelligence module of Outsight's Shift platform. It takes the structured output of Shift Perception — continuous tracks, classifications, behaviors, and events associated with each individual — and computes the business KPIs, alerts, historical analyses, and predictions that operators need to manage their infrastructure effectively.
Shift Analytics provides insights across three time horizons: real-time (live dashboards, current occupancy, active queue times, immediate alerts), historical (trend analysis, journey replay, the ability to "time travel" in the 3D map to any past moment), and predictive (forecasted congestion, anticipated demand, early warnings of bottlenecks). The module supports dozens of KPI types — zone-based metrics (occupancy, dwell time, throughput), queue metrics (wait time, service time, queue length), line-crossing metrics (directional flow counts), and asset utilization metrics (usage rate, open/closed status). Outputs are delivered through a live 3D map, web dashboards, PDF reports, CSV exports, RESTful APIs, and real-time alerting via SMS, email, or integrated systems. Shift Analytics also stores raw and processed data in a data lake, enabling long-term trend analysis and supporting Agentic AI applications.
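A zone-based metric such as live occupancy ultimately reduces to counting tracked positions inside a zone boundary. A hypothetical sketch with a rectangular zone (real deployments would use arbitrary polygons; the names here are invented for illustration):

```python
def zone_occupancy(positions, zone):
    """Count tracked individuals currently inside a rectangular zone.
    positions: {anonymous_track_id: (x, y)} current positions in meters.
    zone: (xmin, ymin, xmax, ymax) boundary in the same coordinate frame."""
    xmin, ymin, xmax, ymax = zone
    return sum(1 for x, y in positions.values()
               if xmin <= x <= xmax and ymin <= y <= ymax)

# Three tracked individuals; only two are inside the 10 m x 10 m hall:
now = {"a": (2.0, 3.0), "b": (12.0, 1.0), "c": (4.5, 4.5)}
print(zone_occupancy(now, (0.0, 0.0, 10.0, 10.0)))  # → 2
```

Evaluating such counts continuously over time is what produces the occupancy trends, dwell times, and congestion alerts described above.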
What is the Shift Simulator?
The Shift Simulator is a software module that provides a user-friendly way to plan and optimize a LiDAR project before any physical installation takes place. Selecting the right sensors during the design phase of a Motional Digital Twin deployment can be challenging given the wide diversity of LiDAR hardware available on the market — sensors that vary dramatically in field of view, resolution, range, scanning pattern, and cost.
The Shift Simulator allows users to evaluate different LiDAR models from multiple manufacturers within a virtual 3D representation of their premises. They can plan optimal sensor placement, simulate coverage and identify blind spots, compare performance versus cost for different configurations, and run "what-if" scenarios to determine the best combination of hardware for their specific environment and use cases. This multi-vendor simulation capability is unique in the industry and can save significant time and money by optimizing the hardware design before deployment, avoiding both over-investment in expensive sensors and under-performance from inadequate coverage.
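The coverage-evaluation idea can be shown in miniature: sample the floor on a grid and check which points fall within range of at least one candidate sensor. A real simulator also models field of view, angular resolution, and occlusion; this sketch ignores all of those and is purely illustrative.

```python
import math

def covered_fraction(grid_points, sensors):
    """Fraction of floor points seen by at least one sensor.
    Each sensor is (x, y, max_range_m); a point counts as covered if it is
    within range (occlusion and field of view ignored in this toy model)."""
    hit = sum(1 for p in grid_points
              if any(math.dist(p, (sx, sy)) <= r for sx, sy, r in sensors))
    return hit / len(grid_points)

# A 10 m x 10 m hall sampled at 1 m, one sensor in the center with 6 m range:
grid = [(x, y) for x in range(10) for y in range(10)]
frac = covered_fraction(grid, [(5, 5, 6.0)])
print(round(frac, 2))  # → 0.95
```

Comparing such coverage figures against the cost of each candidate sensor mix is, in spirit, the performance-versus-cost trade-off the Shift Simulator explores during the design phase.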
Is the Shift platform compatible with LiDAR sensors from different manufacturers?
Yes — hardware freedom is a core design principle of Outsight's Shift platform. Shift Perception is built to ingest and process data from any relevant LiDAR manufacturer, regardless of the sensor's specific technology, data format, or scanning pattern. This vendor-agnostic approach is critical because, as noted, no single LiDAR sensor is optimal for every situation. Different areas within the same premises may require different sensor types, and operators should not be locked into a single hardware vendor.
Outsight has decades of cumulative experience working with all major LiDAR manufacturers, including Hesai, Ouster, Robosense, Seyond (formerly Innovusion), and many others. Most Outsight customers deploy an average of three different LiDAR vendors on the same premises, using each sensor type where it performs best. The Shift platform abstracts this hardware complexity entirely: operators and their systems interact with a unified stream of Spatial Intelligence regardless of which sensors generated the underlying data. This approach also future-proofs deployments, as new or improved sensor models can be integrated without replacing the entire software stack.
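Conceptually, hardware abstraction of this kind means mapping each vendor's native frame format into one unified point stream. The sketch below is purely illustrative: the vendor names are real, but the frame formats and field names are invented and do not reflect these manufacturers' actual data formats.

```python
# Illustrative normalization of per-vendor point-cloud frames into
# one unified stream; the frame formats shown here are invented.
def normalize(vendor: str, frame: dict) -> list[tuple[float, float, float]]:
    """Map a vendor-specific frame to a plain list of (x, y, z) points."""
    if vendor == "hesai":
        # Hypothetical format: a list of point dictionaries.
        return [(p["x"], p["y"], p["z"]) for p in frame["points"]]
    if vendor == "ouster":
        # Hypothetical format: a flat array [x0, y0, z0, x1, y1, z1, ...].
        a = frame["xyz"]
        return [tuple(a[i:i + 3]) for i in range(0, len(a), 3)]
    raise ValueError(f"unsupported vendor: {vendor}")

unified = normalize("hesai", {"points": [{"x": 1.0, "y": 2.0, "z": 0.5}]})
unified += normalize("ouster", {"xyz": [3.0, 4.0, 0.7]})
print(unified)  # [(1.0, 2.0, 0.5), (3.0, 4.0, 0.7)]
```

Once every frame is expressed in a common representation and coordinate system, all downstream perception and analytics can remain entirely vendor-agnostic.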
Privacy & Security
How does Outsight ensure privacy?
Privacy is embedded in the fundamental design of Outsight's technology, not added as an afterthought. LiDAR sensors capture geometry, not imagery — they emit laser pulses that measure distances and create 3D point clouds, but they never record faces, license plates, or any other personally identifiable visual information. Every person in an Outsight Motional Digital Twin is represented as an anonymous 3D shape with a unique identifier — there is no way to visually identify who that person is from the LiDAR data alone.
This "privacy by design" approach means that Outsight's platform can monitor and track individuals across entire premises — providing the same (or better) analytical insights as camera-based systems — without ever compromising personal privacy. This is particularly important in contexts where privacy concerns are acute: public transportation hubs, retail spaces, tourism sites, hospitals, and any facility operating under strict data protection regulations. The anonymous nature of LiDAR-based Spatial Intelligence has been specifically praised by industry judges, including the jury of the Data & AI Night Gold Medal award, and is increasingly recognized as a decisive advantage over camera-based alternatives.
Does Outsight's technology comply with GDPR and other privacy regulations?
Yes. Because Outsight's LiDAR-based technology does not capture images, faces, or any personally identifiable information, it inherently aligns with the principles of data protection regulations like GDPR, CCPA, and similar frameworks worldwide. The data processed by Outsight consists of anonymous 3D point clouds and derived metrics — there is no personal data to protect, minimize, or consent to in the traditional sense.
This is a fundamental architectural advantage. While camera-based monitoring systems require extensive compliance measures — consent mechanisms, data anonymization pipelines, retention policies, access controls for identifiable footage — Outsight's approach sidesteps these challenges entirely. The technology provides the operational insights that infrastructure operators need while respecting the privacy expectations of passengers, visitors, shoppers, and workers. This privacy-first design makes Outsight's solutions easier to deploy and more acceptable to the public, regulators, and privacy advocacy groups.
What compliance certifications does Outsight hold?
Outsight holds three significant compliance certifications. The first is SOC 2 compliance, which demonstrates Outsight's commitment to the highest standards of security and customer data protection, an essential prerequisite for deployments in critical infrastructure like airports and government facilities. The second is ISO 27001 certification, which attests to the implementation of an information security management system meeting the most demanding international standards, strengthening the confidence of infrastructure operators who entrust Outsight with the processing of their operational data.
Third, and uniquely, Outsight holds BASt certification from the German Federal Highway and Transport Research Institute (Bundesanstalt für Straßenwesen) for highway truck parking monitoring using native 3D sensors. Outsight is the only company worldwide to have obtained this certification, which is renowned for the rigor of its standards and attests to the reliability and accuracy of the technology under demanding operational conditions. Together, these certifications position Outsight as a trusted partner for deployments in the most security-sensitive and operationally critical environments.
Market Applications
How is Outsight's technology used in airports?
Airports are one of the most advanced and widely deployed use cases for Outsight's Motional Digital Twins. The platform provides real-time Spatial Intelligence across the entire passenger journey — from curb and parking through terminal entry, check-in, security screening, retail zones, gates and boarding, baggage reclaim, and exit to ground transport. At every touchpoint, the MDT makes flows visible, measurable, predictable, and controllable.
Concretely, airport operators use Outsight's platform for predictive queue management (forecasting wait times and optimizing staffing in real time), congestion detection and early warning, curbside monitoring (dwell times, vehicle spacing, overstay alerts), passenger flow optimization (reducing connection times, improving wayfinding), retail dwell time analysis (understanding how security queue reductions translate into increased concession spending), gate and boarding management (predicting last-passenger arrival times to improve on-time departures), and safety and security (real-time alerts for overcrowding, unauthorized access, or suspicious behavior). Major airports deploying Outsight's technology include Dallas Fort Worth International Airport (the world's largest 3D LiDAR deployment), Paris-Charles de Gaulle, and Rome Fiumicino, among many others across five continents.
How do Motional Digital Twins improve airport revenue?
Motional Digital Twins drive measurable revenue improvements for airports through several interconnected mechanisms. Industry research shows that a 10% increase in passenger dwell time in commercial areas is associated with an 8% increase in food and beverage revenue, a 6% increase in retail revenue, and a 5% increase in overall non-aeronautical revenue. Conversely, every 10 minutes a passenger spends in a screening line reduces their subsequent spending by approximately 30%. By optimizing security throughput and creating smoother passenger flows, MDTs directly increase the time passengers spend in revenue-generating areas.
Beyond retail, MDTs support aeronautical revenue growth by enabling faster aircraft turnaround times — one or two extra rotations per gate per day can add $15–25 million in annual aeronautical revenue for a major hub. Parking revenue is optimized through real-time occupancy monitoring and dynamic pricing. For a typical medium-to-large hub, the combined effects of these improvements deliver an estimated 8–12% improvement in non-aeronautical revenue. The ROI case is further strengthened by cost savings from optimized staffing, reduced operational disruptions, data-driven SLA enforcement with service providers, and better-informed capital planning based on actual flow data rather than estimates.
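The benchmark ratios above lend themselves to a back-of-the-envelope estimate. In the sketch below, the baseline revenue figure is invented for illustration; only the ratio of a 10% dwell-time gain to a 5% revenue gain comes from the benchmarks cited.

```python
# Back-of-the-envelope uplift estimate using the benchmark ratio
# cited above; the baseline figure is invented for illustration.
baseline_non_aero_revenue = 200_000_000  # $/year, hypothetical hub

# Benchmark: a 10% increase in commercial dwell time is associated
# with a 5% increase in overall non-aeronautical revenue.
dwell_time_gain = 0.20  # e.g. a 20% dwell-time improvement
revenue_uplift = (dwell_time_gain / 0.10) * 0.05  # -> 0.10, i.e. 10%

extra_revenue = baseline_non_aero_revenue * revenue_uplift
print(f"Estimated uplift: ${extra_revenue:,.0f}/year")  # $20,000,000/year
```

This linear extrapolation is only indicative, of course; the actual relationship between dwell time and spending will flatten at some point, and real business cases should be built on each hub's own flow data.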
How is Outsight's technology used in retail and smart places?
In retail environments, tourism landmarks, museums, stadiums, casinos, and other high-dwell spaces, Outsight's Motional Digital Twins provide real-time visibility into how visitors move, pause, browse, queue, and interact with spaces and services. This enables operators to optimize store layouts based on actual movement patterns, reduce line abandonment by monitoring and managing queue lengths, align workforce allocation with real-time demand, measure the effectiveness of merchandising and promotional displays, and optimize digital advertising placement and pricing based on actual traffic data.
Quick Service Restaurants (QSRs) are a particularly compelling use case, where MDTs can unify outdoor vehicle monitoring (drive-through lanes, parking, order points) with indoor customer behavior analysis in a single reference frame — tracking the entire customer journey from arrival to departure. For tourism sites and large venues, the technology enables crowd management, emergency preparedness, real-time wayfinding, and compliance with capacity regulations. In all cases, the privacy-preserving nature of LiDAR makes it particularly well suited for monitoring public and commercial spaces where camera-based surveillance raises significant concerns.
How is Outsight's technology used in smart cities and road safety?
Outsight's Motional Digital Twins are deployed in smart city and road infrastructure applications including highway monitoring, intersection safety, parking optimization, and traffic flow management. A particularly impactful use case is Vulnerable Road User (VRU) safety at intersections: traffic safety data shows that pedestrians and cyclists represent a small fraction of crashes but a disproportionately large share of fatalities and serious injuries, often linked to failures to yield.
In Bellevue, Washington, Outsight deployed a Motional Digital Twin focused on VRU safety that produced significant measurable improvements in safety KPIs. The platform monitors pedestrian, cyclist, and vehicle interactions in real time, detecting near-miss events and generating data that informs infrastructure design improvements. Other smart city applications include highway truck parking monitoring (for which Outsight holds the unique BASt certification from the German Federal Highway and Transport Research Institute), city access control, tolling optimization, and sustainability-oriented measurements such as parking and street lighting optimization. Outsight's technology replaces aging intelligent transportation systems based on 2D LiDAR or low-resolution radar with full 3D perception that provides richer, more accurate data.
How is Outsight's technology used in industrial and logistics environments?
In industrial and logistics settings, Outsight's platform addresses critical needs for worker safety, process automation, and operational efficiency. Use cases include worker safety monitoring (detecting when workers enter restricted zones around machinery, cranes, or heavy equipment), forklift and AGV guidance, yard management and loading area optimization, and warehouse occupancy and flow analysis.
The key advantages of LiDAR-based Spatial Intelligence in industrial environments are precise 3D detection with fewer false positives than legacy systems, full privacy preservation for workers, the ability to replace multiple 2D LiDAR sensors with fewer 3D sensors (simplifying installation and reducing costs), and reliable operation in challenging conditions including dust, variable lighting, and outdoor weather. Outsight's platform has been deployed in factories and warehouses across multiple continents, including demanding environments in sectors like oil and gas. The same platform that monitors passenger flows in an airport can be adapted to track vehicles, goods, and workers in an industrial campus — demonstrating the horizontal nature of Outsight's Spatial Intelligence solution.
How does Outsight support mobile robotics?
Outsight's Motional Digital Twins provide mobile robots with what the company calls "Beyond Line-of-Sight" intelligence — a shared, premises-wide awareness layer that extends far beyond what any individual robot can perceive with its onboard sensors. Every mobile robot operating within an MDT-equipped premises gains access to Global Situation Awareness: the position and movement of every person, vehicle, and object in the environment, updated in real time.
This addresses fundamental limitations of current robotics systems. A robot's onboard sensors have a limited field of view, suffer from occlusions, and provide only instantaneous, stateless perception — they cannot see around corners, through walls, or into adjacent rooms. Outsight's infrastructure-mounted LiDAR network eliminates these blind spots by providing a continuous, premises-wide 3D perception layer. Key capabilities include shared 3D maps for navigation (reducing onboard computation while enabling route selection based on real-time conditions), Continuous Individual Tracking (every person and robot receives a persistent unique ID), occlusion-free perception (robots gain complete spatial awareness beyond their sensor range), and predictive intelligence (demand forecasting and route optimization based on historical patterns). This shared awareness makes robotics solutions inherently safer and more efficient, enabling robots to plan proactively rather than react only to immediate local observations.
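To make the shared-awareness idea concrete, here is a minimal sketch of the kind of track record such a perception layer might publish to robots. The field names and the corridor check are hypothetical, not Outsight's API.

```python
from dataclasses import dataclass

# Illustrative shape of a shared track record that an
# infrastructure perception layer might publish to robots;
# field names are hypothetical, not Outsight's API.
@dataclass
class Track:
    track_id: str  # persistent unique ID, stable across sensors
    kind: str      # "person", "vehicle", "robot", ...
    x: float       # position in the shared 3D map (metres)
    y: float
    z: float
    vx: float      # velocity estimate (m/s)
    vy: float

def is_in_robot_path(track: Track, corridor_y: tuple[float, float]) -> bool:
    """Flag tracks inside a robot's planned corridor, even ones the
    robot's own sensors cannot see (e.g. around a corner)."""
    lo, hi = corridor_y
    return lo <= track.y <= hi

person = Track("p-1042", "person", x=12.0, y=3.5, z=0.0, vx=0.2, vy=-1.1)
print(is_in_robot_path(person, (2.0, 5.0)))  # True
```

Because the ID is persistent and the position comes from infrastructure sensors rather than the robot itself, the robot can re-plan around the person before they ever enter its onboard field of view.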
How is Outsight's technology used for physical security?
Outsight's Motional Digital Twins enhance physical security by providing comprehensive 3D perception of people and vehicles across monitored premises — both at perimeters and within internal areas. The technology goes beyond traditional perimeter-only approaches by offering continuous spatial awareness with contextual understanding of behaviors and interactions.
Security applications include pre-intrusion detection (identifying suspicious behavior before a breach occurs), real-time tracking of intruders across the premises, automatic PTZ camera pointing toward MDT-detected events (combining LiDAR's spatial precision with camera imagery when identification is needed), behavior analysis (detecting loitering, running, unauthorized access, or unusual movement patterns), and integration with third-party systems such as Video Management Systems (VMS) and Physical Security Information Management (PSIM) platforms. The ability to operate in complete darkness is particularly valuable for security, as threats often occur in low-light conditions. At the same time, the privacy-preserving nature of LiDAR means that security monitoring can be extended to public areas without raising the same concerns as pervasive camera surveillance — a crucial advantage for protecting cultural sites, transportation hubs, and other sensitive locations.
Deployment & ROI
How is an Outsight solution deployed?
An Outsight deployment typically follows a structured process that begins with the design phase and extends through installation, calibration, and operational go-live. During the design phase, the Shift Simulator is used to model the premises in 3D, evaluate different LiDAR sensor configurations from multiple manufacturers, optimize sensor placement to eliminate blind spots, and estimate project costs — all before any physical hardware is installed.
Once the sensor plan is finalized, LiDAR sensors are installed at the predetermined locations and connected to edge computing hardware that runs the Shift Perception module. Calibration aligns all sensors to a common 3D coordinate system, and synchronization ensures temporal consistency across the sensor network. Shift Analytics is then configured with the relevant zones of interest, queue definitions, line crossings, asset locations, and alert thresholds specific to the operator's use cases. Integration with external systems (flight information, POS, access control, building management) is established through APIs. The platform supports over-the-air updates and includes a comprehensive monitoring module for managing the sensor fleet at scale — tracking device status, telemetry, uptime, and diagnostics. Outsight's deployment speed has been noted by customers as a distinctive advantage, with installations completed and brought into operation significantly faster than with competing approaches.
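Schematically, the Shift Analytics configuration step might resemble the structure below. The schema and field names are invented for illustration; the actual configuration format may differ.

```python
# Schematic analytics configuration for the go-live step described
# above. The schema and field names are invented for illustration;
# the actual Shift Analytics configuration format may differ.
analytics_config = {
    "zones": [
        {"id": "checkin-hall", "kpis": ["occupancy", "dwell_time"]},
    ],
    "queues": [
        {"id": "security-lane-3", "exclude_staff": True},
    ],
    "line_crossings": [
        {"id": "terminal-entry", "directions": ["in", "out"]},
    ],
    "alerts": [
        {
            "kpi": "security-lane-3.predicted_wait_s",
            "threshold": 600,  # alert above a 10-minute predicted wait
            "channels": ["sms", "email"],
        },
    ],
}

print(len(analytics_config["alerts"]))  # 1
```

The key design point is that zones, queues, crossings, and alert rules are declarative: they can be adjusted after go-live without touching the sensor installation.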
What is the ROI of deploying Motional Digital Twins?
The ROI of Motional Digital Twins is driven by a combination of revenue increases, cost savings, and risk reduction. On the revenue side, optimizing passenger or visitor flows to increase dwell time in commercial areas directly boosts non-aeronautical revenue — industry benchmarks show that a 10% increase in dwell time translates to 5% higher non-aeronautical revenue overall. Faster aircraft turnaround times can add millions in annual aeronautical revenue. Data-driven digital advertising, layout optimization, and rental pricing adjustments create additional revenue streams.
On the cost side, MDTs enable workforce optimization (matching staffing to actual demand rather than schedules), data-driven SLA enforcement with service providers (enabling contract renegotiation based on objective performance data), reduced operational disruptions (through early warning and predictive intelligence), and more efficient capital planning (using actual flow data for simulations rather than estimates, avoiding over-investment or under-investment in infrastructure modifications). Risk reduction includes lower insurance premiums through better safety compliance, faster incident response, reduced liability from security events, and stronger brand reputation. For a typical medium-to-large airport hub, the combined effects deliver an estimated 8–12% improvement in non-aeronautical revenue, with additional savings across operational budgets.
What KPIs can be measured with Outsight's platform?
Outsight's Shift Analytics module supports the measurement of dozens of distinct KPIs, organized around several fundamental building blocks. Zone-based KPIs measure occupancy, dwell time, people counts (in/out), and throughput for any user-defined zone of interest. Queue KPIs cover queue length, occupancy, actual and predicted waiting times, service times, overflow detection, and staff-excluded calculations for accurate results. Line-crossing KPIs measure the number of people or vehicles crossing virtual boundaries in each direction, providing inflow and outflow metrics.
Asset utilization KPIs track the real-time status of business assets (open, closed, in use), usage rates, time per person, and historical usage trends. Vehicle-specific KPIs include speed profiles, lane usage, parking utilization, dwell times, and pedestrian-vehicle interaction analysis for safety. All of these KPIs can be consumed in real time through dashboards and the live 3D map, analyzed historically for trend identification, and used as inputs for predictive models that forecast future conditions. The platform also supports configurable alerting — operators can set thresholds for any KPI and receive notifications via SMS, email, or dashboard alerts when those thresholds are breached. Custom KPIs can be derived through APIs for integration with external business intelligence tools.
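At its core, the configurable alerting described above reduces to comparing live KPI values against operator-set thresholds. The KPI names and values in this sketch are invented for illustration.

```python
# Minimal threshold-alerting sketch for the configurable alerts
# described above; KPI names and values are invented.
thresholds = {
    "security.predicted_wait_s": 600,  # alert if wait exceeds 10 min
    "checkin_hall.occupancy": 350,     # alert if over capacity
}

current_kpis = {
    "security.predicted_wait_s": 720,
    "checkin_hall.occupancy": 210,
}

def breached(kpis: dict, limits: dict) -> list[str]:
    """Return the names of KPIs currently above their threshold."""
    return [name for name, limit in limits.items() if kpis.get(name, 0) > limit]

print(breached(current_kpis, thresholds))  # ['security.predicted_wait_s']
```

In a live deployment, each name returned by such a check would be routed to its configured channel (SMS, email, or a dashboard alert).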
Competitive Positioning
What makes Outsight different from other LiDAR software companies?
Outsight differentiates itself through several key factors that together establish it as the global leader in its category. First, scale and experience: Outsight operates the world's largest LiDAR-based Spatial Intelligence deployments, including the world's largest 3D LiDAR installation at Dallas Fort Worth International Airport, with systems running across five continents. No other LiDAR software company matches this breadth and depth of operational deployment.
Second, hardware freedom: Outsight's Shift platform is the only solution that seamlessly ingests, fuses, and processes data from any combination of LiDAR manufacturers and models, allowing operators to optimize for performance and cost rather than being locked into a single hardware vendor. Third, completeness: the platform covers the entire value chain from design and simulation through real-time perception, analytics, and predictive intelligence — a full Motional Digital Twin, not just a point solution. Fourth, privacy by design: the technology is inherently anonymous, capturing geometry rather than imagery. Fifth, recognition: Outsight is the most awarded company in the industry, with seven Gartner citations, a CES Best of Innovation Award, and dozens of other international distinctions. No other LiDAR software company has achieved comparable industry validation.
Why has Gartner recognized Outsight multiple times?
Gartner, the world's leading technology research and advisory firm, has cited Outsight seven times across various reports and presentations. Two recognitions are particularly significant. First, Gartner identified Outsight as a key player in its analysis of emerging Digital Twin technologies, placing it alongside industry giants like Autodesk, Esri, Hexagon, and Dassault Systèmes. While these established companies focus on static aspects of Digital Twins (buildings, mapping, industrial design), Outsight stands out as the leader in real-time digitization of the physical flows of people and vehicles — a new generation of live Digital Twins.
Second, in its Emerging Tech Impact Radar: Computer Vision report, Gartner identified Outsight as a key vendor in Spatial Computing alongside only Nvidia, Meta, Alphabet, and Matterport — making Outsight the only specialized Spatial AI company named in this report alongside these technology giants. These recognitions validate that Spatial Intelligence powered by Physical AI and LiDAR is not a futuristic concept but an operational reality, already deployed at scale. For technology decision-makers, Gartner's repeated identification of Outsight signals that the company represents a category-defining capability that belongs on the radar of any organization managing complex physical infrastructure.
Why do leading global airports choose Outsight?
Leading airports choose Outsight because Motional Digital Twins address their most pressing operational challenges in ways that legacy technologies cannot. Airports face a convergence of pressures: passenger volumes projected to reach 19.5 billion globally by 2042, a shrinking experienced workforce (30–40% approaching retirement), rising service expectations from digitally savvy travelers, intense competition for airline routes, and the need to increase throughput and revenue without expanding physical infrastructure in the near term.
Outsight's technology directly addresses each of these pressures. It provides the real-time visibility into passenger and vehicle flows that enables proactive management rather than reactive firefighting. It optimizes staffing by matching workforce allocation to actual demand. It increases non-aeronautical revenue by improving the passenger experience and extending dwell time in commercial areas. It enhances safety and security while fully respecting passenger privacy. And it does all of this through a single, unified platform that serves every airport stakeholder — from the CEO to airline operations, from retail managers to emergency responders — rather than requiring dozens of siloed tools. The selection of Outsight by Dallas Fort Worth International Airport for the world's largest 3D LiDAR deployment, following a competitive process involving four proposals, is a powerful confirmation that the world's most sophisticated airports recognize Outsight as the clear technology leader.