Tech companies want to automate our communities—in particular, our transportation and mobility systems. Unfortunately, we’re not well-prepared—physically or psychologically—to adjust to the onslaught.

From the latest developments in driverless vehicles in San Francisco to the reintroduction of robot dogs in New York and the growing push for delivery drones across the U.S., it’s clear that Silicon Valley firms aspire to remove as many people from the transportation equation as possible—unless we pay them for a ride. They’ve disrupted communities and traffic with ride-sharing services such as Uber and Lyft, but this seems to be the next step: removing the humans who not only need income, but also provide a level of interface between the tech companies and their consumers.

Tech companies seem to expect that we will continue to tolerate this inundation by their various “solutions,” all while not realizing that many of these “disruptive innovations” may create more problems. This is particularly applicable to those of us who are not able to take advantage of these devices and services and instead have to change our behavior to adapt to them. As more and more technologies cross our paths, this may prove to be too much for us.

It all comes down to “divided attention” and “inattentional blindness,” two interrelated concepts from cognitive psychology. Divided attention describes the state in which our attention is split—as it would be if we were driving (or walking) down the street and talking on our phone. We can’t focus on doing both tasks at the same time, so we rapidly cycle between them. Inattentional blindness is the result of that division, causing us to miss objects and people in plain sight.

Both driving and walking are social tasks. We are constantly negotiating space and distance with each other, and working cooperatively to yield or take right-of-way. The roads and public sidewalks are full of people with divided attention and inattentional blindness. When distracted, we fail to see other cars or pedestrians while driving and run through stop signs, or we bump into other people and fixed structures while we walk.

But since we are a cooperative species who live in groups, the social ways that we’ve evolved encourage us to help others. If we see someone who is distracted on the road or sidewalk and facing danger, we pull them out of the way, yell, or honk to alert them so they’ll stay safe. As a group, we rely on predictable behaviors, too, so we have stability. Our laws help to reinforce predictable behaviors, and even when people need (or feel they need) to disobey them, we have figured out signals and gestures to communicate intent, ask the other party to yield, and negotiate a compromise—or move out of the way. It doesn’t work perfectly, but as long as everyone isn’t trying to disobey rules and laws at the same time, we manage.

At present, these automated additions to our transportation spaces are not group-oriented or social—not with each other, nor with us. Instead, these autonomous vehicles (AVs) and robots (and, likely soon, drones) are individually focused and primarily task-driven. They lack flexibility and can’t warn us of danger when we aren’t paying attention in the social way that humans can; they can’t negotiate space or needs, either. And there are many of them—all deployed by different vendors, with different algorithms. This means that their behaviors are particularly unpredictable and threatening to us.

Take drones as an example. Airports are predictable places for flight, with regulations to keep us and other aircraft safe from accidents. In contrast, drones can take off and land anywhere. In an urban environment, we have no idea whether they will stay up or fall on us—or hit objects or birds that would then fall on us. Autonomous vehicles and mobile robots carry similar but distinct threats. We can’t make eye contact with them before we cross a street in front of them, or when we want to suddenly change direction—they are nonnegotiable entities. This makes them anti-social, which makes them not part of our group—which makes them threatening not just in concept, but in practice.

It isn’t just that we have divided attention and the AVs don’t. It’s that pedestrians have divided attention about some things, human drivers have divided attention about other things, and the AVs have (mostly) anti-social, self-centered attention. The combination of these elements is the problem. And while people help each other (and even robots) compensate for divided attention, AVs and robots don’t.

People are actually great at adapting because we can negotiate socially. But we’re nearing a tipping point: once there are too many of these things, running too many different algorithms, it will become nearly impossible to adapt to them all successfully.

These systems and machines rely upon inferences from captured “data” and learning (e.g., a red light means “stop”), and upon the predictable habits or “rule” structures of an imagined, stereotyped person who belongs to no group. They relate one-to-one, which is not solely how we function. We are now being required (not asked) to yield constantly and to cooperate with each of these tech companies’ automated experiments individually. We will have to watch out for them, get out of their way, and safeguard ourselves, because they do not have the flexibility and sociability to engage with us. We’re still figuring out how to get along with the diverse ways of being that exist between humans within a society. These systems don’t have human needs, and they seem uncooperative and anti-social.

In some ways, the pandemic lockdowns encouraged automation experiments. Empty streets devoid of people falsely created a world where these types of automated systems could work—so long as they were owned by single companies and had complete domination over the road space. This may be what technology companies envision, but it is not the reality. We aren’t all going to stay home. We are going to drive our cars, from the vendors that we like, in the social and cultural ways we learned to drive, which is not the simulated, algorithmic, monocultural way that AVs and other automated entities navigate. We are going to walk, too, and will likely be distracted doing both. With all these automation developments targeting mobility in cities, there is now an automated robot layer and a human layer, and the human layer is yielding (and will continue to yield) to the robot layer.

Machines are limited in their abilities to cooperate with us. Because they can’t work with us in many ways, we work with them—and their limitations. This matters because we are about to be inundated not just with autonomous cars or drones or robot dogs or mobile robots, but with cars and drones and robot dogs and mobile robots all in the same place at the same time.

The shutdown of overt smart city development has been followed by covert smart city development through transportation. We use our phones to run our lives, to navigate and send messages, and to infer our next moves with smart agents like Siri—and technology companies are harvesting that data and offering more and more automation to us. We simply do not have the physiological sensors to monitor all that is around us—especially when we are plugged into our phones and everything they are blasting at us.

We should be concerned that people are becoming the infrastructure required to support the profits and success of the technology companies’ automated experiments. The toll this is already taking on us (with phones alone) has been profound. In response, tech companies have offered even more automation, with ex-Google chief Eric Schmidt suggesting that tech companies regulate themselves. But since many tech companies have fired their ethics teams, this proposal does not serve the needs of the general public. Furthermore, more automation, such as that controlled by AI, cannot solve this and will make matters worse. The false assumptions the algorithms make about how to forecast our behavior will lead to more mistakes, and the systems in place for us to alert tech companies to their failings are automated, too.

This means that we will have to deal with these automations perpetually and for free, while the tech companies leverage our goodwill labor and sociability, all while taking our jobs, time, and attention.

---

Author(s): S. A. Applin

Source: Fast Company, 21.06.2023
