Helm.ai
We license AI software across the L2-L4 autonomous driving stack: perception, intent modeling, path planning, and vehicle control. High-accuracy perception and intent prediction lead to safer autonomous driving systems. Unsupervised learning and mathematical modeling, rather than supervised learning, allow us to learn from huge datasets without manual labeling, making our technology up to several orders of magnitude more capital-efficient and greatly lowering development cost. Demonstrations include: full-scene vision-based semantic segmentation fused with Lidar SLAM output from Ouster; L2+ autonomous driving across highways 280, 92, and 101 with lane-keeping, ACC, and lane changes; pedestrian segmentation with keypoint prediction; lane detection in rain and other corner cases with Lidar-vision fusion; and full-scene semantic segmentation of Botts' dots and faded lane markings.
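One common way to fuse per-pixel semantic segmentation with lidar, as in the Ouster demonstration above, is to project each 3D point into the camera image and attach the class label of the pixel it lands on. The sketch below illustrates that idea only; the camera intrinsics, class names, and function are invented for illustration and are not Helm.ai's implementation.

```python
# Hypothetical sketch: label lidar points with per-pixel semantic classes by
# pinhole projection into the segmented camera image. Intrinsics and class
# list are made-up example values.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # fx, 0, cx
              [0.0, 500.0, 240.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])
CLASSES = ["road", "lane_marking", "pedestrian"]  # illustrative labels only

def label_lidar_points(points_xyz, seg_mask):
    """Attach a semantic class to each lidar point in the camera frame.

    points_xyz: (N, 3) array of points with z > 0 (in front of the camera).
    seg_mask:   (H, W) integer array of per-pixel class indices.
    Returns a list of (point, class_name) for points that project in-frame.
    """
    h, w = seg_mask.shape
    labeled = []
    for p in points_xyz:
        uvw = K @ p                      # pinhole projection
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:  # keep only in-frame points
            labeled.append((p, CLASSES[seg_mask[vi, ui]]))
    return labeled

# A point 10 m ahead on the optical axis projects to the image center,
# which the all-zeros mask labels "road".
mask = np.zeros((480, 640), dtype=int)
pts = np.array([[0.0, 0.0, 10.0]])
print(label_lidar_points(pts, mask)[0][1])  # road
```

In practice the points would first be transformed from the lidar frame into the camera frame using the calibrated extrinsics; that step is omitted here for brevity.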
Learn more
Ansys VRXPERIENCE Driving Simulator
Discover an open, scalable and modular virtual driving simulator that enables testing against a variety of objectives and performance requirements. Ansys VRXPERIENCE Driving Simulator Powered by SCANeR™ enables you to assemble scenarios, test software, consider vehicle dynamics and experience sensors within a virtual driving environment, providing a fully virtual driving lab for analyzing performance results. VRXPERIENCE Driving Simulator offers an immersive simulated test-drive experience set within a representative world. Perform exhaustive safety assessments by driving millions of virtual miles in days, accelerating development by 1,000x compared with physical road testing. As passenger automobiles become more digitalized and more autonomous, they require a wide range of advanced technologies, including sensors such as cameras, radar and lidar, as well as embedded software supporting automated control systems.
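The "millions of virtual miles" approach rests on sweeping scenario parameters combinatorially, so coverage grows multiplicatively with each varied dimension. The sketch below shows the idea generically; the parameter names, values, and route length are illustrative and are not the SCANeR scenario format.

```python
# Generic sketch of a scenario parameter sweep for virtual test driving:
# enumerate every combination of scenario parameters and tally the virtual
# distance a batch of simulated runs covers. All values are illustrative.
from itertools import product

weather    = ["clear", "rain", "fog"]
ego_speed  = [50, 80, 110]   # km/h
cut_in_gap = [10, 20, 30]    # metres to a cutting-in vehicle

ROUTE_KM = 5                 # length of one simulated run

scenarios = list(product(weather, ego_speed, cut_in_gap))
virtual_km = len(scenarios) * ROUTE_KM
print(f"{len(scenarios)} scenarios, {virtual_km} virtual km")
# 27 scenarios, 135 virtual km
```

Adding one more three-valued parameter triples the scenario count, which is why simulated sweeps can accumulate mileage far faster than physical road testing.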
Learn more
Apollo Autonomous Vehicle Platform
Various sensors, such as LiDAR, cameras and radar, collect environmental data surrounding the vehicle. Using sensor fusion technology, perception algorithms determine in real time the type, location, velocity and orientation of objects on the road. This autonomous perception system is backed by Baidu's big data and deep learning technologies, a vast collection of labeled real-world driving data, a large-scale deep-learning platform, and GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
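A minimal illustration of the fusion idea, not Apollo's implementation: when two sensors report noisy measurements of the same quantity (say, an object's range from lidar and from radar), weighting each by the inverse of its noise variance gives the minimum-variance linear estimate. The noise values below are invented for the example.

```python
# Minimal sketch of inverse-variance sensor fusion: combine two noisy scalar
# measurements of the same quantity into one estimate that is more certain
# than either input. Sensor noise variances are illustrative.

def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two scalar measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

# Lidar reads 20.0 m (low noise), radar reads 21.0 m (higher noise):
est, var = fuse(20.0, 0.01, 21.0, 0.04)
print(round(est, 2), round(var, 4))  # 20.2 0.008
```

Note the fused variance (0.008) is smaller than either sensor's alone, which is the core benefit of fusing complementary sensors; full tracking systems extend this idea to vectors of position, velocity and orientation, typically with a Kalman filter.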
Learn more
Aurora Driver
Created from industry-leading hardware and software, the Aurora Driver is designed to adapt to a variety of vehicle types and use cases, allowing us to deliver the benefits of self-driving across several industries, including long-haul trucking, local goods delivery, and people movement. The Aurora Driver consists of sensors that perceive the world, software that plans a safe path through it, and the computer that powers and integrates them both with the vehicle. The Aurora Driver was designed to operate any vehicle type, from a sedan to a Class 8 truck. The Aurora Computer is the central hub that connects our hardware and autonomy software and enables the Aurora Driver to seamlessly integrate with every vehicle type. Our custom-designed sensor suite—including FirstLight Lidar, long-range imaging radar, and high-resolution cameras—works together to build a 3D representation of the world, giving the Aurora Driver a 360˚ view of what's happening around the vehicle in real time.
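Building a single 360° representation from many sensors requires expressing every sensor's measurements in one common vehicle frame via each sensor's mounting pose (a rigid rotation plus translation). The sketch below shows that transform for a yaw-only mounting; the poses and function are invented for illustration and are not Aurora's implementation.

```python
# Hedged sketch of merging sensor data into a common vehicle frame: rotate
# each point by the sensor's mounting yaw, then shift by its mounting offset.
# Mounting poses are invented example values.
import numpy as np

def to_vehicle_frame(points, yaw_deg, mount_xyz):
    """Rotate (N, 3) points by the sensor's yaw and add its mounting offset."""
    th = np.radians(yaw_deg)
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])
    return points @ R.T + np.asarray(mount_xyz)

# A rear-facing sensor (yaw 180 deg) mounted 2 m behind the vehicle origin
# sees a point 5 m "ahead" of itself; in the vehicle frame that point sits
# 7 m behind the origin.
rear = to_vehicle_frame(np.array([[5.0, 0.0, 0.0]]), 180.0, [-2.0, 0.0, 1.5])
print(np.round(rear, 2))
```

Once all sensors' points share one frame, overlapping fields of view stitch together into the 360° picture; real systems use full 6-DOF calibrated extrinsics rather than the yaw-only rotation shown here.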
Learn more