As sensor technology advances and becomes more affordable, these devices are being built into human interfaces. Coupled with machine learning, they make systems and gadgets more aware of what a person is doing, is about to do, or has just finished, and of what is happening around the user or the device. A growing worry, however, is that privacy and accountability concerns could eventually impede the progress of sensor technology and its adoption.
Are we forgoing improved safety features because they aren't faultless? Are new inventions and areas of application being neglected because of liability or privacy worries? Setting those concerns aside for a moment, what advanced usability and features could be implemented in the vehicle, and what would it take to turn them into reality?
Sensors Behind the Scenes
To explain how the innovative use of sensor technologies has already changed our lives, let’s look at early mobile phones and laptops. In the case of mobile phones, the user turns the screen to get portrait or landscape images using motion sensors. By measuring gravity with an accelerometer, the phone can determine whether it’s facing vertically or horizontally and adjust the display accordingly.
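The orientation decision described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the axis and sign conventions (y along the phone's long edge, gravity reading in g) are assumptions, and real devices add filtering and hysteresis.

```python
def detect_orientation(ax: float, ay: float) -> str:
    """Classify screen orientation from the accelerometer's gravity reading.

    ax, ay: acceleration along the device's x (short) and y (long) axes, in g.
    When the phone is upright, gravity pulls mostly along y; when the phone
    is turned on its side, gravity shifts onto x. The sign convention here
    (ay negative when upright) is an illustrative assumption.
    """
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait-upside-down"
    return "landscape-left" if ax < 0 else "landscape-right"
```

For example, a phone held upright reads most of its 1 g along the y axis, so `detect_orientation(0.05, -0.98)` classifies it as portrait, while tipping it onto its side shifts gravity onto x and flips the result to landscape.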
Adding more “senses” has made devices more aware and the experiences they deliver more natural and compelling. When a smartphone is asleep on a table, the screen turns on the moment we pick it up to look at it.
Phones combine input from a gyroscope with data from an accelerometer during the pickup, making it possible for the phone to determine whether its final position, or pose, corresponds to one where the screen is oriented toward the user’s face. If so, the device can warm up from its low-power sleep mode to get the processor, screen, and radio running, enabling a faster response for the user. The screen is “magically” turned on, enhancing the user experience with the device.
Alternatively, the combined sensor data can indicate the device was likely put into a pocket instead. Thus, the device returns to the low-power sleep mode, saving precious power.
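The pickup-versus-pocket logic described above can be sketched as a simple two-stage check: the gyroscope confirms real motion, then the resting accelerometer pose decides whether to wake. All thresholds and axis conventions below are illustrative assumptions, not taken from any real product.

```python
def classify_pickup(gyro_peak_rad_s: float, az_g: float) -> str:
    """Fuse a gyroscope motion cue with the resting accelerometer pose.

    gyro_peak_rad_s: peak angular rate (rad/s) seen during the motion event,
        used to confirm the device was actually picked up, not just nudged.
    az_g: gravity component along the screen normal after the motion settles
        (~1 when the phone lies flat, screen up; ~0 when the phone is held
        vertically, as in a pocket). Conventions and thresholds are
        illustrative assumptions.
    """
    if gyro_peak_rad_s < 1.0:
        return "stay-asleep"      # barely rotated: a nudge, not a pickup
    if 0.3 <= az_g <= 0.9:
        return "wake-screen"      # settled at a tilt toward a viewer's face
    return "stay-asleep"          # flat on a table or upright in a pocket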
Microphones let many devices listen for voice commands that warrant a response: “How can I help you? What’s going on?” Always-on listening is an example of how sensors change the way we interface with our devices, creating a more human-like experience.
Ultrasonic time-of-flight (ToF) sensors can measure the range from where the sensor is mounted to detect an approaching object. ToF sensors can be used with ML to detect user presence near a laptop. As someone approaches, the screen lights up. Conversely, if we walk away and leave the room, the laptop locks the screen and goes into low-power mode, thereby increasing the security of our information and saving power.
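The wake-and-lock behavior described above can be sketched as a small controller over a stream of range readings. The thresholds, the hysteresis gap, and the debounce count are illustrative assumptions; a production system would tune these and likely add ML-based classification.

```python
def presence_controller(ranges_mm, wake_mm=800, leave_mm=1500, absent_samples=5):
    """Turn a stream of ToF range readings (mm) into wake/lock events.

    Hysteresis (wake_mm < leave_mm) plus a debounce count keeps the laptop
    from flapping on a single noisy reading. All thresholds are illustrative.
    """
    events, present, missing = [], False, 0
    for r in ranges_mm:
        if not present and r < wake_mm:
            present, missing = True, 0
            events.append("wake")           # someone approached: light up
        elif present and r > leave_mm:
            missing += 1
            if missing >= absent_samples:   # consistently absent: lock
                present = False
                events.append("lock")
        else:
            missing = 0                     # still nearby: reset debounce
    return events
```

Feeding it a sequence where someone walks up and later leaves, e.g. `presence_controller([2000, 700, 650] + [2000] * 5)`, yields a `"wake"` followed by a `"lock"`, the security-and-power behavior the article describes.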
Automotive Sensor Potential
The use of sensors in the car has, until recently, been driven primarily by safety applications outside of the vehicle, such as collision avoidance, lane departure warning, adaptive cruise control, and vehicle stability. Much like smartphones and laptops, though, the car is now becoming a device where sensors enhance usability. Most modern cars have a microphone at least for hands-free calling but, in some vehicles, occupants can also ask the infotainment system for directions, to change the playlist, or to adjust the temperature.
Sensor inputs in the automated cabin enable the vehicle to make decisions and take actions. As more sensors are added, their data is combined, a process known as “sensor fusion.” Newer systems add ML to build scenario models, combining data from multiple sensors so the processor can recognize patterns and identify what’s happening. Examples include:
Smarter airbags: Adding pressure sensors, primarily in the front seat cushions, to assess the size and weight of the occupant enables next-generation airbags to adjust their settings accordingly. For instance, if the occupant is small and lightweight, the airbag inflates with less force than it would for a larger occupant. As a bonus, if there’s no front-seat passenger, the system deploys only the airbags needed for the occupants and the collision. If the car isn’t a write-off, that saves an expensive and unnecessary repair.
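The pressure-based adjustment above amounts to mapping a seat-cushion reading onto a deployment setting. The weight bands below are purely illustrative; real systems fuse several sensors and follow regulatory occupant-classification rules rather than a single cutoff.

```python
def airbag_setting(seat_pressure_kg: float) -> str:
    """Map a front seat-cushion pressure reading to an airbag setting.

    The weight bands here are illustrative assumptions, not regulatory
    classifications.
    """
    if seat_pressure_kg < 12.0:
        return "suppress"     # empty seat or child seat: do not deploy
    if seat_pressure_kg < 45.0:
        return "low-force"    # small, lightweight occupant
    return "full-force"       # full-size occupant
```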
Driver alertness: Driver fatigue monitors, also called driver alertness systems, sense fading concentration. Some earlier systems monitored driver behavior by sensing erratic steering-wheel movements or lane deviations. If the system suspects a high risk that the driver is drowsy, it sounds an audible signal to raise the driver’s alertness.
Similar to the laptop example, ToF sensors could be used for driver alertness, mainly to detect whether the driver is moving. Temperature sensors can monitor in-cabin temperature, on the premise that a driver is more likely to grow drowsy as the cabin gets hotter.
Today, many automakers incorporate cameras in the cabin for driver monitoring systems (DMS). With the camera focused on the driver’s face, the system can be used for other applications besides driver alertness. Through facial recognition, the DMS can identify the driver, enabling the car to automatically restore their preferences and settings, such as seat position and preferred radio channel. It also can monitor driver attentiveness, ensuring they keep their eyes on the road ahead.
Occupancy detection: Using a ToF sensor in combination with a microphone, a pressure sensor in the back seat cushion, and an in-cabin temperature sensor, we can now determine whether that object in the back seat of a car parked in the sun is likely to be an occupant, and take action. Remember, any one of those sensors alone could produce a false positive.
When we fuse the sensor inputs, we can be much more confident about the situation and determine the appropriate response, another strong case for the trend toward sensor fusion.
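One minimal way to sketch this fusion is a weighted vote: no single cue can trigger a response on its own, but agreement between cues can. The weights, thresholds, and the hot-cabin escalation below are illustrative assumptions, not any automaker's design.

```python
def occupant_likely(tof_motion: bool, mic_sound: bool,
                    seat_pressure: bool, cabin_temp_c: float) -> bool:
    """Fuse four cabin cues into one occupancy decision.

    Each cue alone can false-positive (a bag on the seat, street noise),
    so a weighted score must cross a threshold before the car acts.
    Weights and thresholds are illustrative assumptions.
    """
    score = 0.0
    score += 0.4 if tof_motion else 0.0     # something moved in the back seat
    score += 0.3 if mic_sound else 0.0      # crying, barking, rustling
    score += 0.3 if seat_pressure else 0.0  # weight on the rear cushion
    # Escalate on weaker evidence when the cabin is dangerously hot.
    threshold = 0.5 if cabin_temp_c >= 35.0 else 0.7
    return score >= threshold
```

Note the design choice: pressure alone (a heavy bag) never triggers the alarm, but pressure plus detected motion in a hot cabin does, which is exactly the confidence gain sensor fusion is meant to deliver.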
Some cars can now send a message to the driver’s smartphone app: “Hey, did you leave something in your car?” If the driver doesn’t respond, the system could roll down the windows, sound the horn, and flash the lights, hopefully attracting someone’s attention and preventing a tragedy.
Of course, sensor fusion increases confidence in the response and helps avoid false positives and negatives. We wouldn’t want the windows rolling down unnecessarily, or the occupancy-detection system failing in other ways. Because of this uncertainty, however small, liability concerns are growing around safety features that can’t be 100% effective.
Sensor Implementation Issues
A sure-fire way to assess whether a child or dog is locked in a parked car would be to train cameras on the passengers. DMS cameras are already being adopted, so expanding the use of cameras to monitor passengers ought to be considered.
Although the technology exists, much of it can’t yet be implemented because it doesn’t conform to existing legal, ethical, and societal constraints. There are privacy concerns, one component of which is malicious hacking. Will someone be able to activate a feature when it shouldn’t be? Could anyone access an image of a child from the camera?
Is it possible for citizen groups, the insurance industry, the legal profession, and regulatory agencies to work together to clear paths for the adoption of new safety features and technologies? Can they agree on what’s reliable (in terms of safety statistics and system security), what’s effective enough, and what sufficiently respects privacy concerns?