The road to autonomous driving is built on Advanced Driver Assistance Systems (ADAS) and new applications focused on creating safer driving environments for drivers, passengers, and pedestrians.
It's an engineering environment composed of forward-looking machine vision cameras, video sensors, radar, and light detection and ranging (LIDAR) sensors. These systems produce streams of raw video data, much of it needed in duplicate or multiple copies for machine vision, viewing, parallel processing, and data-logging applications.
Replicating sensor data can take place at several different points along the video design path. For instance, it's possible to connect each sensor via separate cables to both machine-vision and data-logging electronic control units (ECUs). But this approach doubles the number of required cables.
A chip-level solution from TI bypasses this issue by splitting the data after sensor aggregation. The DS90UB964-Q1 quad deserializer hub can aggregate raw data from up to four different sensors and create two copies of the combined data without external components such as splitters and bridge chips. In this arrangement, data is received and aggregated into a MIPI CSI-2-compliant output for interconnection to a downstream processor. A second MIPI CSI-2 output port is available to provide additional bandwidth or a second, replicated output.
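To make the aggregate-then-replicate idea concrete, here is a simplified conceptual model in Python. It is an illustrative sketch, not the device's register interface: each sensor's frames are tagged with a MIPI CSI-2 virtual channel ID (0-3), interleaved into one stream, and that stream is duplicated to two output ports. The function names and round-robin interleaving are assumptions for the example.

```python
# Conceptual model of quad-sensor aggregation with MIPI CSI-2 virtual
# channels, plus replication of the aggregated stream to two output ports.
# Illustrative only -- not the DS90UB964-Q1 programming interface.

def aggregate(sensor_frames):
    """Tag each sensor's frames with a virtual channel ID (0-3) and
    interleave them into a single output stream."""
    stream = []
    # Round-robin interleave: one frame per sensor per pass.
    for frames in zip(*sensor_frames):
        for vc_id, frame in enumerate(frames):
            stream.append({"vc": vc_id, "data": frame})
    return stream

def replicate(stream):
    """Produce two identical copies of the aggregated stream, one per
    output port (e.g. machine vision and data logging)."""
    return list(stream), list(stream)

# Four sensors, two frames each (hypothetical data).
sensors = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"], ["d0", "d1"]]
port0, port1 = replicate(aggregate(sensors))
# Both ports carry the same interleaved, VC-tagged stream.
```

The downstream processor can then demultiplex the sensors again by virtual channel ID, which is how CSI-2 distinguishes interleaved sources on a shared link.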
Machine vision algorithms (such as object recognition) can process one stream while the other is recorded to memory for data logging. Furthermore, the connected sensors need not all be the same; combining multiple sensors of different types, resolutions, and speeds can enable true sensor fusion systems. For example, you can merge data from a combination of cameras with different frame rates as well as separate radar sensors.
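One common way to combine sensors running at different rates is nearest-timestamp matching. The sketch below pairs each sample from a 10 Hz radar with the closest-in-time frame from a 30 fps camera; the timestamps, rates, and helper names are hypothetical and chosen only to illustrate the idea.

```python
# Illustrative nearest-timestamp fusion of two sensor streams running at
# different rates: a ~30 fps camera and a 10 Hz radar. Hypothetical data;
# not device-specific code.

def fuse_nearest(camera_ts, radar_ts):
    """For each radar timestamp, pick the camera frame whose timestamp
    is closest; returns a list of (radar_ts, camera_ts) pairs."""
    pairs = []
    for r in radar_ts:
        nearest = min(camera_ts, key=lambda c: abs(c - r))
        pairs.append((r, nearest))
    return pairs

# Timestamps in milliseconds: camera every 33 ms, radar every 100 ms.
camera_ts = [i * 33 for i in range(10)]   # 0, 33, 66, ..., 297
radar_ts = [i * 100 for i in range(3)]    # 0, 100, 200

print(fuse_nearest(camera_ts, radar_ts))
# -> [(0, 0), (100, 99), (200, 198)]
```

A production fusion system would also compensate for sensor latency and interpolate between frames, but the matching step above is the core of aligning heterogeneous streams.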