
Technologies of Knowing. Sonia Misra and Maria Zalewska, editors, Spectator 36:1 (Spring 2016): 8-24.

Sensing, Seeing, and Knowing: The Human and the Self-Driving Car

Erik Stayton

Abstract: Like other information technologies, autonomous vehicles depend upon, create, and transmit representations of knowledge: knowledge about users, goals, outcomes, and the physical world. But there is a paradox at the heart of modern media and information technologies: that software must simultaneously know and not know, and companies (such as Google, with the right-to-be-forgotten case) must sometimes strive not to see the very things that allow their systems to operate.

Developments in driverless vehicle technology are ushering in a new chapter in human history. As we learn to live with two-ton robots traveling next to us at speed, we will need to develop new legal and ethical understandings to deal with these inanimate agencies and their paradoxical nature. In order to function, they will need to collect data constantly: not only their own position, but the locations of other cars and pedestrians, the time and destination of the trip, even the number of passengers. Demands for convenience and safety--not to mention corporate demands for advertising revenue--may well dictate that the cars know who is in them at all times. But increased functionality is predicated on an invasion of privacy, a transgression of existing informational norms. What is sensed, and what can be transmitted back to stationary servers--with which these vehicles will likely be in constant contact--is of paramount importance to a consideration of automated vehicles as media technologies, and to the social regulation of this technology.

This ideology of information gathering is rooted in the fields that have come together to create self-driving vehicles: artificial intelligence, computer vision, GIS and mapping, and statistical science. But how we must understand our coming transportational interlocutors is, as yet, unclear. Through a consideration of the hidden ideologies of autonomous vehicle research, I explore this question and its opposite: through what processes and to what ends will these systems and the companies that run them know or encode us, as users giving commands, passengers, pedestrians, and fellow drivers? I conclude that the sort of knowing that these systems engage in is deeply dependent on their expected roles and control philosophies, which must be subject not only to technical requirements but also to cultural goals and social expectations.

Artifacts serve many purposes and can be seen through many lenses and disciplinary perspectives. Consider the automobile, camera, computer, television, and game platform: each is at once a technological, social, and media artifact. Though the essential core of the car has not changed, our vehicles now serve a wide variety of other purposes through additions to their design: connectivity, communication, entertainment, and navigation. Though some of these purposes are at least as old as the addition of radio to the automobile,1 the technological makeup of the modern car is becoming more computerized, and more complex, with each new vehicle generation.

Cars are becoming screens. This movement has already begun, as companies replace physical gauges with luminescent panels that reproduce mechanical gauges in form, or represent new kinds of information.2 Center consoles, long home to rows of buttons and knobs, are being replaced with touchscreen displays.3 Backup cameras and side-mirror cameras present live video feeds on the dashboard so that drivers can better judge their vehicle's position using perspectives normally inaccessible to the human being.4 Dashboard-based navigation screens provide turn-by-turn instructions, traffic and hazard warnings, and access to information about nearby businesses. Children sitting in the back seat face small monitors built into the seatbacks, a modern, mediated solution to the old problem of the perpetually asked "Are we there yet?"


Freed from the necessity of hosting a human driver involved in the mechanical components of the driving task, the driverless car concepts shown at the Consumer Electronics Show in Las Vegas in January 2015 take this vision further. Mercedes-Benz's F015 autonomous car concept forgoes the traditional seating arrangement in favor of rotating chairs that turn the center of the car into a lounge or meeting space.5 The car's side windows are relatively minimal, its doors taken up primarily by touch screens. In fact, the interior is covered in touchable displays, set up for an immersive digital experience. Its publicity photos show serious-looking, young white businesspeople in uniform grey work clothing.6 The environment is high tech and sharply clinical. Far from an exuberant depiction of the promise of media technology in the automobile, this future is so serious as to be dull: a homogeneous work space bleeding out into other parts of life.

Autonomous vehicles are, in this albeit limited view, positioned as new consumer media devices first and foremost, transforming the task of actively driving into interaction with, or passive consumption of, mediated content. This perspective stands to shape the future development of these vehicles and consumers' expectations of them, but it is not the only way in which driverless cars can be seen as media devices, and several other perspectives run far deeper beneath the surface. Though the extreme vision of autonomous vehicles present in concepts like Mercedes's F015 may not ever come to pass, autonomous and semi-autonomous vehicles will still be, by necessity and by design, media technologies par excellence precisely because they depend upon the receipt and presentation of information through media interfaces: networks, cameras, and screens.

    Lessons in Information Politics

Examples from other networked technologies are highly instructive in relation to the issues surrounding autonomous vehicles as media technologies. In May of 2014, the European Union's Court of Justice ruled in the landmark Costeja decision that since Google is processing personal data, and acting as a data controller, it may be compelled to remove links to pages containing personal information from its search results.7 The EU ruling, in deference to European tradition and contrary to that of the United States, places an individual's right to privacy above the ability of users to access information online.8 There is growing recognition that publicly available data can be highly sensitive and that it may be beneficial to allow individuals certain legal rights to control their own electronic reputations, at least in particular circumstances. Costeja opens the floodgates: from previously limited, targeted removals through court cases and copyright law, to the possibility of free and open public removal of public information. But much of our information is even harder to control.

This story connects autonomous vehicles to a larger media landscape while dramatizing a key contradiction at the heart of modern media systems: that companies must strive not to see some of the things that allow their systems to operate. Google's search technology depends upon a comprehensive network: the connections within the network determine the relevance of each source, and additional sources increase relevance toward the ideal result of a perfectly accurate, omniscient natural-language database of the Web. This reliance can be traced back to the original Google PageRank paper from 1998.9 Another paper a year later10 describes a method for automatic extraction of information from the Web, relying on the identification of certain data patterns for a small set of known objects, and using those patterns to discover a greater object set, which can then be used to identify further characteristic patterns. These sorts of large-scale statistical knowledge-gathering approaches are designed to deal with large amounts of untagged content, and are ideologically aligned with unfettered, open access (information that is protected or removed cannot be indexed, and is therefore useless).
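The dependence on the link network that the 1998 paper describes can be sketched as a short power iteration: a page's rank is fed to it by the pages that link to it, so removing sources diminishes what the system can know. This is an illustrative toy, not Google's production algorithm; the graph and parameter values are invented for the example.

```python
# A minimal power-iteration sketch of the PageRank idea: rank flows along
# links, so a page is important when important pages link to it.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}   # teleport share
        for page, outlinks in links.items():
            if not outlinks:                 # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny invented link graph: everything points (directly or not) at "c".
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))   # the most-linked-to page wins
```

The sketch also shows the ideological point in the passage above: a page absent from the graph (removed or protected) simply contributes nothing and receives nothing.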

Despite continued work, online tracking regulations and restrictions will likely continue to be elusive unless a judicial or legislative body, such as the EU Court of Justice, steps in with further rulings.11 Elsewhere among data-driven business models, increases in functionality are predicated on invasions of, or encroachments on, what we used to think was private, and represent increasingly invasive data collection and sharing at a massive scale. Often this information is used internally to improve services, but it may also be aggregated and sold to third parties, and in either event may be stolen or leaked by disgruntled employees or thieves. It has become a truism on the web that when something online is free, "you're not the customer, you're the product,"12 but the lure of information is such that one's personal data is becoming an ever more valuable asset even to companies for whom we are ostensibly already the customers. Information security issues are leaving the browser and entering the physical world. Ride-sharing company Uber has recently discussed proposals to share aggregated and anonymized ride information with city governments, starting in Boston, as a way to allow cities to better understand commuter patterns and, presumably, to curry favor with authorities that might otherwise attempt to shut the service down.13 This example is particularly important as a precursor of things to come in the autonomous vehicle space, as such vehicles will allow this data and more to be collected and shared with other entities.
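The limits of the "anonymized" in such sharing can be made concrete with a toy sketch: a handful of origin-destination-time observations acts as a quasi-identifier that narrows an anonymous trip log to a single person. The zones, hours, and records below are invented for illustration.

```python
# Invented "anonymized" trip log: IDs are opaque, but trips are intact.
trips = [
    # (anonymous_id, origin_zone, destination_zone, hour)
    ("u1", "back_bay", "kendall", 8),
    ("u2", "back_bay", "kendall", 8),
    ("u3", "southie",  "kendall", 8),
    ("u1", "kendall",  "fenway",  18),
    ("u2", "kendall",  "seaport", 18),
]

def matching_ids(observations):
    """Anonymous IDs consistent with every externally observed trip."""
    candidates = None
    for origin, dest, hour in observations:
        ids = {t[0] for t in trips if t[1:] == (origin, dest, hour)}
        candidates = ids if candidates is None else candidates & ids
    return candidates

# One observed trip leaves two candidates; a second pins down one person.
print(sorted(matching_ids([("back_bay", "kendall", 8)])))
# -> ['u1', 'u2']
print(sorted(matching_ids([("back_bay", "kendall", 8),
                           ("kendall", "fenway", 18)])))
# -> ['u1']
```

Anyone who can observe even a couple of someone's real trips (a neighbor, an employer, a subpoena) can intersect them against the "anonymous" log and recover that person's entire travel history.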

The paradox of information technologies that these cases dramatize is of the utmost importance as computing technologies move deeper into everyday devices. What is sensed, and what can be appropriately transmitted back to servers for processing and storage, is of paramount importance: this is a question of privacy in context.14 Following Nissenbaum, privacy must always be seen in the context of particular users and a particular use.15 It is not that data about our commuting routes should never be collected, but that collected data should not be sent uncritically to any third party without our knowledge or consent, which would violate our informational norms.16 There may be legitimate uses for certain types of sensitive information: providing it to municipal governments specifically to assist in city planning may be legitimate, while selling it to advertisers to help them design more effective billboards may not be. Privacy issues involving motor vehicles are likely to become much more complicated as vehicles become able to record more, and potentially know more, about their passengers.
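Nissenbaum's test can be rendered schematically: whether a data flow is appropriate depends on the norms of the context in which the data originates, not on the data alone. The contexts, recipients, and purposes in this sketch are invented placeholders, not a real policy vocabulary.

```python
# A schematic sketch of contextual integrity: the same data may flow
# appropriately to one recipient for one purpose, and not to another.
NORMS = {
    # context: set of (recipient, purpose) pairs considered appropriate
    "commuting":   {("municipal_government", "city_planning")},
    "ride_hailing": {("driver", "navigation"), ("operator", "billing")},
}

def flow_permitted(context, recipient, purpose):
    """True if this flow matches the informational norms of its context."""
    return (recipient, purpose) in NORMS.get(context, set())

# Same commuting data, different purposes, different verdicts:
print(flow_permitted("commuting", "municipal_government", "city_planning"))
# -> True
print(flow_permitted("commuting", "advertiser", "billboard_targeting"))
# -> False
```

The point of the sketch is that "is this collection acceptable?" is the wrong question; the evaluable unit is the flow: context, sender, recipient, and purpose together.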

    Networks and Networked Data

With what networks, and for what reasons, will autonomous vehicles be connected? The term autonomous might suggest that these vehicles will operate without network connections, but vehicle autonomy is meant as non-reliance on human input, rather than as a description of the vehicle's disconnection from other information systems. While certain guidance systems have been highly autonomous in the informational sense--inertial guidance systems for intercontinental ballistic missiles come to mind in particular,17 and Google's vehicles themselves use inertial navigation aids18--much so-called autonomous navigation depends on access to global positioning satellites. Google's driverless car technology in particular depends on highly accurate, and very expensive,19 differential GPS technology.
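Why informationally autonomous inertial navigation still needs external fixes can be seen in a toy dead-reckoning sketch: integrating acceleration twice turns even a tiny constant sensor bias into a position error that grows with the square of elapsed time, which is what external corrections such as (differential) GPS exist to cancel. The bias and timing values here are invented for illustration.

```python
# Toy dead reckoning: double-integrate a (biased) accelerometer reading
# and compare against ground truth. Not any vendor's actual algorithm.
def dead_reckon(true_accel, bias, dt, steps):
    """Return the position error accumulated from a constant sensor bias."""
    v_est = p_est = v_true = p_true = 0.0
    for _ in range(steps):
        v_true += true_accel * dt
        p_true += v_true * dt
        v_est += (true_accel + bias) * dt    # sensor reads accel + bias
        p_est += v_est * dt
    return abs(p_est - p_true)

# A 0.01 m/s^2 bias (a rather good accelerometer) over 10 minutes of driving:
error = dead_reckon(true_accel=0.0, bias=0.01, dt=0.1, steps=6000)
print(round(error, 1))   # roughly 0.5 * 0.01 * 600^2 = 1800 m of drift
```

Missile-grade inertial platforms keep this drift small with extraordinarily precise (and expensive) sensors; consumer vehicles instead correct it continuously against satellite fixes, which is why their "autonomy" presumes connection.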

There is still space for significant debate about the roles of human beings in so-called self-driving systems. Despite numerous news articles suggesting otherwise,20 humans will be involved in the driving process, in at least some ways, for the foreseeable future. Deciding exactly what those ways are is a matter both of engineering practice and of appealing to the market and its squeamishness about computer-controlled driving.21 But from another point of view, autonomous vehicles, regardless of the role of the human, will be anything but autonomous in practice. They will be networked. GM's OnStar service already connects vehicles to central servers for purposes of safety, security, and convenience. It can automatically alert the authorities in case of an accident or theft, and can in principle be used to locate the vehicle at any time. The service also provides vehicle diagnostics and a connection to a tablet or smartphone that allows the owner to configure settings, lock and unlock doors, and even operate the lights and horn from any location.22 While these vehicles are not (yet) autonomous, OnStar's capabilities suggest features that will become more common in highly connected and computerized vehicles.

Vehicles that can receive information from each other and from the roadway are a top priority for the NHTSA, and have been on the research agenda for decades.23 General Motors' 1950s Firebird concept vehicles sold themselves on this research, despite the fact that the Firebird II had no automated capabilities whatsoever.24 GM's promotional films suggested the vehicle could be controlled electronically from traffic control

towers--like those in use in aviation--placed along major highways: the car was under the direction of an electronic brain on a dream highway of the future.25 This would have been the realization of the New York World's Fair exhibit from two decades earlier: GM's Futurama from the 1939 World's Fair depicted cars that maintained distance from each other via a sophisticated system of radio control.26 Also in the 1950s, RCA's Vladimir Zworykin, a lead inventor of television technology, was working on an intelligent road system of his own. His concept, inspired by railroad block signals, used circuits embedded in the road to magnetically sense vehicle speed and location, placing sensing and coordination capabilities outside of the vehicle, in road-side systems.27 Zworykin's centralized planning model would send instructions to individual cars, and a 1/40th scale demonstration system was built for the 1960 Highway Research Board meeting in Washington D.C.28

Current proposals pick up on elements of these sixty-year-old dreams. Intelligent vehicle-highway systems (IVHS) research also attempts to locate some of the intelligence in the roadway, and as part of this research drive the NHTSA's 2013 Preliminary Statement of Policy Concerning Automated Vehicles connects forthcoming driverless systems to research in vehicle-to-vehicle (V2V) communications.29 These communication links could inform drivers, prevent crashes between electronically-controlled systems, or improve traffic flow on a system-wide scale. The Department of Transportation is investigating the use of Vehicle Safety Communications to improve roadway safety, but anonymity or security measures are not prevailing as part of current proposals in these areas.30 If cars are networked with other vehicles, traffic lights, roadways, or city infrastructure, recorded data may be leaking into all manner of systems. Networked technologies do not have a great security record, and there is no reason to believe these will be any different.31 Even if vehicles are not networked with each other, central servers will be a source of constant information, as they already are for maps applications on smartphones and computers. But some external processing power may also be used to allow automated vehicles to function more efficiently. In his "A Legal Perspective on Three Misconceptions in Vehicle Automation," Bryant Walker Smith attacks the idea that automated vehicles necessarily imply vehicle-to-vehicle information connections, but concludes that automated vehicles will likely be connected to external servers to provide maps and traffic data, and to coordinate operations of a large fleet of vehicles.32 MIT's Ryan Chin suggests that a likely approach to making driverless cars affordable will be to locate more of the information processing system in more-easily-replaced external servers that are not space-constrained as the ones inside a vehicle would be.33 Two-way information flows will be the rule, not the exception, driven by the commercial necessity to reduce costs and increase capabilities.

These descriptions of autonomy research foreground the role of autonomous vehicles as media technologies of the future, in the sense that they are interactive presenters and receivers of information, deeply enmeshed in issues of seeing and knowing. These vehicles must display information, including visual information; they must be interfaces between humans and the road or, as Bill Mitchell would have it, the city.34 Though the trend within Google's autonomous vehicle research is to minimize the human-machine interface, as evidenced by the move toward small vehicles with only a "go" or "stop" button,35 other companies working on automated vehicles are increasing the information provided to the driver through electronic safety systems (such as blind-spot monitoring or night vision), and these systems are prime targets for inclusion in future autonomous cars. David Mindell at MIT, a prominent researcher in human factors and machine autonomy, suggests that the future of autonomous vehicle interfaces lies in providing the driver with a fuller view of the road via the car's sensors, and representing the vehicle's intent on that display.36

With digital screens already present, such views might well be complemented with targeted advertisements. Google has so far remained relatively silent about advertising in its autonomous vehicle patent applications, discussing only the possibilities of selling information to businesses about how many cars arrive at particular locations during a day, or make U-turns just before getting there.37 But there is potential to do far more with the available information, and data collected by the vehicle to make its own functioning possible could be made to serve other purposes. Information

about the vehicle and its surroundings, including the locations of cars and pedestrians, precise GPS coordinates of the vehicle, and the vehicle's speed and acceleration, not only represents important knowledge for path-finding by the vehicle itself, but also new sources of potential revenue for the groups in a position to collect it. Uber, which through its GPS-enabled ride-hiring applications still collects only a fraction of the data that would be available through a self-driving vehicle, has, as already noted, agreed to share its ride data with cities. Though this data will be anonymized, security researchers have shown time after time that anonymization and aggregation are no guarantee that certain aspects of the data cannot be traced back to individuals.38

These sorts of security and privacy issues are not unique to autonomous vehicles, nor even to networked vehicles. What Roger Clarke calls dataveillance is already possible: electronic tolls already allow for a measure of tracking.39 Networked traffic cameras are already being used to amass large databases of information about (non-networked) cars and their travel patterns by reading passing license plates.40 With sufficient coverage of cities and regions, these databases can capture the movements of large segments of the population, and therefore include potentially sensitive information about people's movements. Despite this, their collection and use remain largely unregulated.41 California police are coming under criticism from civil rights groups for their mass collection of vehicle position information through police license plate cameras.42 Clearly, these types of contested data collection are already possible. But networked vehicles, with their arrays of sensors, provide more avenues of data collection, and therefore stand to increase our present-day problems with mass surveillance and personal privacy.

Google has envisioned vehicles that can determine their number of occupants, and use facial-recognition or other biometric systems to identify them. According to the patent, these vehicles could prevent unauthorized persons from putting a child in a car, prevent convicted sex offenders from operating their vehicles within the legally-required distances of schools and playgrounds, or prevent a car's doors from being opened (even from the inside) by a child unless an authorized adult is present.43 These are only visions, not yet realities, but they represent a safety culture which has decided technological surveillance and enforcement are appropriate responses to the mere potential of criminal behavior. Whether or not protecting against these threats is an appropriate use of this information is a matter for societal judgment, but such proposals, if enacted, would require these vehicles to have unprecedented levels of very sensitive knowledge about people and their lives: biometrics, criminal histories, family and trust networks.

Data and Data Subjects

So how will these devices know us? First, as we have already seen, we will be data subjects: accumulations of information about behaviors and patterns collected by our vehicles. We live in a surveillance climate in which the frequency and nature of monitoring is changing: becoming automated, undiscriminating, and accommodating new subjects, monitors and motives.44 Playing directly into the ideologies of Big Data and statistical science, automated vehicles provide avenues for the collection of more data, driven by the desire for more and better predictions. Rather than trying to account for such subjective things as free will and individuality, statistical techniques classify people into categories based on various criteria, and on the assumption that if other people in your category exhibit certain behaviors, you are likely to do the same.45 In short, human subjects become patterns. But this ideology has a troubled relationship to correlation and causation. Consider the following claim:

We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.46

This is the ideology of the data subject: the pattern in the noise. Whether or not we as humans can identify and test the mechanism by which such a relationship could be generated, at least some large-scale data analytics work asks us to trust the algorithm, to take the calculated results as a given.

How do we understand this tendency? The core issue is not the amount of data. Statistics has always had to contend with bad data: bad data make bad predictions. And having large amounts of data does not necessarily make that data any better.47 People make the same mistakes in understanding big and small data, but what the data science craze should show is how easy those mistakes are to make. Consider a simple but instructive example: we toss a coin and get heads 1000 times, and tails 1001 times. How do we interpret this behavior? Naively it would appear random, and nothing in this information could lead us to conclude otherwise. But focusing on the proportion alone doesn't expose that if our pattern was HTHTHTHTHT..., always alternating heads and tails, that behavior is anything but random: it is highly ordered. Nothing is fundamentally wrong with the mathematics involved, but the approach we applied was limited by our own vision, our model of the world, and the question we were trying to ask. We could of course apply any number of techniques to make sense of the data gathered, but always subject to human subjectivity.48 And questions of associating cancer rates with hundreds of different lifestyle factors (especially ones collected from different sources, using a variety of techniques), or of associating words to identify the topics of documents, are much more complex. The latter example depends on a further question of interpretation: it assumes the reality of a document's topic. Does everything have a singular topic that can be defined, even by a human? While these questions fall outside of the purview of statistical science, they are implicated at the very core of its operations.

The key is that no amount of data can prove hypotheses; data can only be used to contradict them. This is the scientific method: the scientific process is built around disproving. However, few nonspecialists understand statistics well (and that, unfortunately, includes most policymakers), and the culture of data and quantification means that we are too easily gulled by attractive figures, suggestive of mathematical certainties. Big Data culture has increased the commodification of data, and increased the impetus for businesses and governments to squeeze insights from it, taking advantage of an increasing number of commercial software platforms and a growing pool of data scientists.49 It is easier than ever for the untrained to find patterns in data, but the ease of finding patterns reveals that patterns have never been equal to truth. Everyone who takes basic statistics is taught that correlation is not causation, but applying that principle in practice is more difficult than remembering an aphorism. As machine-learning expert Michael Jordan points out, we need to use the appropriate statistical machinery to make sense of newly comprehensive data collection: it must become a science in process as well as in name, with deep concern for uncertainty.50

Yet, driven by some combination of success stories, hype, and the theoretical validity of underlying statistical methods, the technology industry continues to invest in data collection and processing: data science techniques are being applied to improve services, and are popularized as ways to better understand ourselves and our society. Data-driven ventures like Uber have used their users' data to fill blogs with insights about their customers.51 Changes in the advertising landscape have risen on this wave of data-driven investment: social ads, and ads that draw on Facebook likes (something Facebook is focusing on with its new ad network52). A recent push toward geotargeted advertisements for mobile devices takes this idea one step further, adding contextual position information to allow ads to be targeted to businesses you are passing by, with the idea that this information will be more relevant.

Creation of geotargeted ads to be shown inside automated vehicles might come as an outgrowth of this trend. When the human driver no longer has to concentrate on the road, her eyes and ears are free to concentrate on something else. While that is often presented as an opportunity to work, watch a movie, or, in the most cynical of articles, view yet another YouTube cat video,53 it cannot have escaped the notice of the advertising establishment that it presents new opportunities to show customized and personalized advertisements in a way that billboards cannot match. To organizations in a continual search to keep their advertising techniques fresh and novel, and to stand out among the noise of physical billboards and web banner advertisements, this would represent another new frontier.

If data collection in autonomous vehicles is

allowed to occur as unfettered as it is on the web, we would be known by the digital traces of our physical behaviors, and the statistical inferences, correct or incorrect, that can be drawn out of them. As Nissenbaum makes clear in Privacy in Context, the key concern about being a data subject is not necessarily what is collected, or that any data is collected at all, but the context in which and purposes for which it is used. In the current infrastructure of data collection and use, it is generally impossible to specify or audit how and by whom collected data is used. Lacking access to the storage databases, we are cut off from our own representations, alienated from data we generate. Like the first alienation of the worker from the product of her labor, long decried by Marxist theory, this second alienation generates a further power disparity between the worker (in this case, the data generator) and the beneficiary of the labor (in this case, the data collector).54 Here, however, the worker is not compensated monetarily for her labor: as Christian Fuchs describes, she is thus infinitely exploited.55 From this data-focused perspective we are known as a collection of behaviors, represented by data points, held in databases to which we do not have access and therefore cannot fully comprehend. We are patterns to be sold and sold to.

Shapes and Bodies

Second, we will be shapes on an image, or depth information from a LIDAR scanner, interpreted by computer vision algorithms. While machines had been able to weave since the development of Vaucanson's automated loom in 1747,56 and were performing a number of industrial tasks in factories by the 1950s57 (though still only in limited numbers, even by the 1970s58), none were able to interpret complex visual stimuli. Interpretation of the world around us is a task that seems particularly easy for human beings, but particularly difficult for machines. The invention of the photocell, early on a tool for workplace monitoring and surveillance, provided a simple channel through which electrical systems could respond to the amount of light reaching them.59 Early vehicle automation technologies applied these photocells to feedback systems: the "electric eye automobile," a concept presented in Modern Mechanix in April 1936, would have used an array of photocells to close the visual feedback loop, allowing a light-guided automobile to stay on a specially designed track. As the article describes:

With speeds, such as recently attained by the famous Sir Malcom Campbell, already approaching the point where human reflexes are too slow to insure safe control of the car, science has turned to the photo electric cell for a possible solution. A proposed driverless car involves the use of multiple electric eyes as the heart of its steering mechanism. A powerful beam of light directed at a large lens on the front of the car is concentrated on steel mirrors set at an angle in the trackbed. The reflections are caught by the electric eyes which convey the electrical impulses to a mechanical-electrical brain which keeps the speeding car on its course.60

Such a use should not be surprising since, as the article itself notes, German railway engineers had been applying photocells to automatically control the brakes on trains in Munich for several years, a conceptually similar but much simpler operation.61

Though the photocell provides the computer with access to brightness information over time, it falls far short of the ability of human eyes to perceive detail and depth, identify shapes, and interpret expression and motion. Emulating these characteristics represents a key goal of artificial intelligence research. DARPA's 1983 Strategic Computing Initiative included image interpretation as one of its main focus areas.62 But it is only relatively recently that real-time video processing became possible for camera-based navigation on computer systems small enough to fit in a standard automobile.63 And machine vision problems, including object recognition and scene interpretation, continue to be difficult, even with increased processing power and new algorithms.

As an engineering discipline, computer vision takes a decidedly practical and reductionist view of what it means to see. The goal is generally not to achieve creative interpretation or aesthetic valuation, but to differentiate free space from things a robot should not run into.64 But this so-called objective focus still encodes certain subjective judgments about objects (including people), behaviors, and

  • 15TECHNOLOGIES OF KNOWING

sensitive information about free space and obstacles. Shape-detection algorithms can then be used (in addition to vision-based data) to classify obstacles as different types of objects.71 But this apparently detailed knowledge is still only skin-deep: as Lee Gomes notes, the sensors cannot differentiate a rock from a crumpled newspaper, and will swerve to avoid both.72
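The kind of depth information a LIDAR array provides can be reduced to a minimal sketch: each laser return is a bearing and a range, and together the returns yield the positions of surrounding surfaces. The units, data layout, and bearing convention below are assumptions for illustration, not any vendor's interface.

```python
# Minimal sketch: converting a rotating LIDAR's (bearing, range) returns
# into 2D obstacle positions around the vehicle. Illustrative only.
import math

def scan_to_points(returns):
    """Convert (bearing_degrees, range_m) laser returns to (x, y) meters,
    with x pointing forward and y pointing left of the vehicle."""
    points = []
    for bearing_deg, range_m in returns:
        theta = math.radians(bearing_deg)
        points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points

# One simulated sweep: a surface 10 m dead ahead (bearing 0 degrees)
# and another 5 m to the left (bearing 90 degrees).
points = scan_to_points([(0.0, 10.0), (90.0, 5.0)])
```

Note what such a sketch makes plain: the output is only geometry. Nothing in the list of points says whether a surface is a rock or a crumpled newspaper; that distinction has to come from a separate classification step, if it comes at all.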

While much computer vision uses machine-learning algorithms to detect objects, engineers in automotive applications are justifiably reluctant to rely on machine learning: as Göde Both has noted in his ethnographic research on developers of driverless cars in Europe,73 machine learning techniques are brittle and unpredictable;74 neither characteristic makes them suited for software that must be highly reliable and on which people's lives literally depend. Mixes of manual and machine-learning approaches are therefore used for object detection. Though machine learning can be a highly effective technique, it is generally difficult or impossible to know what the system has actually learned, and therefore how it will react in new and unknown situations.75 Pedestrian detection algorithms search for person-like shapes, where person-like is determined by, for example, processing thousands of images previously classified (by people) as images of humans, so that the system can learn the features that correlate with a person being in a particular region of an image. In a sense, the computer develops a concept of a person. So long as the right features appear in each new situation, this approach works; but what the computer has learned is essentially black-boxed, and resists introspection.
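As a rough illustration of this kind of learning (a deliberately toy sketch, not the convolutional or boosted classifiers actually used in vehicles), a system can average labeled examples into a prototype and label new regions by their proximity to it. The feature vectors here are invented:

```python
# Toy illustration (not any production system): "learning" person-like
# features by averaging labeled training examples, then classifying new
# image regions by distance to the learned prototypes.
import math

def learn_prototype(examples):
    """Average a list of feature vectors into a single prototype."""
    n = len(examples)
    dims = len(examples[0])
    return [sum(e[i] for e in examples) / n for i in range(dims)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(region, prototypes):
    """Label a region with the nearest learned prototype's category."""
    return min(prototypes, key=lambda label: distance(region, prototypes[label]))

# Hypothetical feature vectors: (aspect ratio, height in meters)
person_examples = [(0.4, 1.7), (0.35, 1.8), (0.45, 1.6)]
car_examples = [(2.5, 1.5), (2.2, 1.4), (2.8, 1.6)]

prototypes = {
    "pedestrian": learn_prototype(person_examples),
    "car": learn_prototype(car_examples),
}
print(classify((0.4, 1.75), prototypes))  # prints "pedestrian"
```

Even in this toy version the black-boxing concern is visible in miniature: the learned prototype is just a list of numbers, and nothing in it explains why a new shape was or was not labeled a pedestrian.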

These sensors and techniques impact how we are known by machine-vision systems. Based on the views of these sensors, objects in the environment are placed into categories by the vehicle's computers: pedestrian, bicyclist, car, truck. Categories allow the system to make statistical predictions about likely types of behavior: according to one of Google's patent applications, bicyclists are likely to be more erratic than trucks, and should be treated accordingly.76 An obstacle detected as a person can be expected to move, perhaps erratically, while an object that is considered inanimate will not. The DARPA Urban Challenge crash, the first crash between two autonomous cars, provides an

intent. And while computer vision has had success with object detection, there is a wide variety of human knowledge about objects and scenes that is missing from current computer models.65

Though vision has not always been the sensory mode that dominated autonomous vehicle research (earlier research focused on radio control or electromagnetic tracks), vision is a particularly tempting sense to use, as it is integral to how humans drive. In an attempt to build autonomous vehicles that can operate without infrastructural changes, research has moved away from tracks and cables toward vision-guided systems. New approaches were pioneered by Ernst Dickmanns at the Universität der Bundeswehr in Munich, with the vision-guided VaMoRs van, and continued via the EUREKA PROMETHEUS project in 1987, in which Dickmanns and Daimler-Benz built cars guided by analog video cameras.66 Like the earlier VITA project by Daimler, which used an analog video-camera signal processed through a framegrabber, these cars digitized analog video at relatively low resolutions. The features the systems searched for, including lane markings and other cars, were geometrically distinct and visible even in small images.67

Today's computer vision systems for autonomous vehicles also focus on pedestrian detection. Feature detection attempts to locate other inhabitants of the environment via the unique shapes of pedestrians and bicyclists. Vision-guided systems, now using digital video cameras and off-the-shelf consumer hardware, have the benefit of being inexpensive and insensitive to interference from other nearby devices (unlike sonar, for example, which becomes problematic in crowded situations68). Some commercial systems, such as that developed for Mercedes-Benz's self-driving S-Class, which is slowly finding its way into consumer vehicles, are guided primarily by such visual sensors.69 To these sensors, recent research has added roof-mounted LIDAR arrays. LIDAR, short for Light Detection and Ranging, is effectively a depth sensor: applied in vehicles, it scans the environment with a rotating array of laser beams to create a detailed 360-degree representation of objects and their distances.70 This technology solves some of the difficulties of image interpretation by default, as it can provide highly-

  • 16 SPRING 2016

strollers? Or crawling children versus dogs? The ability of these algorithms to be objective is limited by their categories, which are essentially limited by the foresight and attention of their designers. Even the most complex computer vision systems currently in existence have much more limited concepts of the world than we do, based on much more limited experience with it (i.e., still images only).80 Who we are, visually, to computers is a social and cultural question as much as it is a technical one.

    Maps, Laws, and Customs

Third, we will be known through objects on maps of the world, maps that we must create in order for automated systems to function. In order to drive with us, autonomous systems will have to understand, in at least a practical sense of understanding, traffic rules and their accompanying signs, signals, lanes, and customs. This is, at its core, a highly complex problem, because human understanding is built through years of experience. It is through existing as a human being in a particular cultural context that we know to drive on roads but not on sidewalks, and how to tell the difference.

Autonomy research seeks to create robotic systems that can function on their own, but as we have already seen, these systems are very often connected to other information networks. The vehicles in the DARPA Grand Challenge did not navigate on their own: they used GPS to follow a path laid out for them in advance, using their autonomy only to avoid obstacles like rocks and ruts.81 And though successful road tests have been accomplished without navigational assistance, using only visual stimuli (such as the EUREKA PROMETHEUS project previously mentioned), modern systems tend to use more external stimuli, rather than less, in an attempt to increase safety. Even as advanced as it is, Google's autonomous vehicle technology requires hyper-detailed 3D maps in order to operate properly on public roadways.82 These maps are generated by vehicles outfitted with special sensor arrays, like the LIDAR Google uses for its autonomous vehicles, which drive a route and collect data that can be used to reconstruct the model.83
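The division of labor described above, in which navigation follows a pre-supplied GPS route and on-board autonomy handles only local obstacle avoidance, can be sketched in a few lines. The steering convention and detour angle are invented for illustration:

```python
# Sketch of Grand-Challenge-style navigation: the global path is given in
# advance as waypoints; "autonomy" is only a local swerve around obstacles.
import math

def bearing_to(position, waypoint):
    """Angle (radians) from the vehicle's position to the next waypoint."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    return math.atan2(dy, dx)

def steer(position, waypoint, obstacle_ahead, detour_rad=0.3):
    """Follow the pre-laid path; deviate only to dodge a detected obstacle."""
    heading = bearing_to(position, waypoint)
    if obstacle_ahead:
        heading += detour_rad  # small local detour, then back on route
    return heading
```

The point of the sketch is how little deciding the "autonomous" part does: the route itself, the thing we would ordinarily call navigation, is handed to the vehicle from outside.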

    Pre-made maps are used so the vehicle knows where stoplights, signs, and curbs are, reducing

    important lesson on the vagaries of object detection: the classification threshold between moving and stationary, set too high, allowed one vehicle to interpret the other as stationary, leaving no room for unexpected behavior.77 Though the exact details are specific to a certain technical situation, the overall lesson is general: object misidentification comes with potentially serious costs.
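The failure mode described here can be reduced to a minimal sketch: a single speed threshold decides the moving/stationary label, and a threshold set too high erases the information that a slow-moving vehicle may keep moving. The threshold values are invented for illustration; the actual systems involved were far more complex.

```python
# Illustrative sketch of the failure mode: a moving/stationary label
# decided by a single speed threshold.
def label_motion(speed_m_per_s, threshold_m_per_s):
    return "moving" if speed_m_per_s >= threshold_m_per_s else "stationary"

creeping_car_speed = 1.0  # a car inching forward in a queue

# With a too-high threshold, the creeping car is labeled stationary,
# so its future motion is never anticipated by the other vehicle.
assert label_motion(creeping_car_speed, threshold_m_per_s=2.0) == "stationary"
# A lower threshold preserves the information that it may keep moving.
assert label_motion(creeping_car_speed, threshold_m_per_s=0.5) == "moving"
```

The categorical cut is the problem: once the continuous measurement is collapsed into a binary label, everything downstream treats the object as one thing or the other.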

In visual dramatizations and debugging, detected features are shown with boxes around them, following humans as they move through the environment.78 The new technologies of vehicle automation thereby produce through their operation new forms of evidence, which can be presented through electronic information media. Because autonomous cars see (putatively as we see), their sight can be leveraged as visual evidence. Computer vision systems that identify pedestrians can be shown to do so, via the detection boxes that act as diagnostic tools for researchers and direct representations of internal system information. LIDAR readings get visualized as 3D environments, showing objects in creepy topological relief like something out of a 1980s vision of cyberspace, and present the same opportunities for transparent visual proof. Three-dimensional shapes, standing in stark relief against the background, bear witness to the sensory operation of the vehicle. These shapes too are demarcated by boxes, which represent their computational transformation from information into an object or artifact of interest. Friedrich Kittler might make much of this parallel with military technologies: the pedestrian as target, in the sights of the machine.79

In this picture we are known through our visual features, our body, our prominence against the background: as human shape, as relief and shadow. But human subjectivity is continually important in the computer vision equation. How those who do not fit the norms of human appearance will be interpreted is an open question. While a person in a wheelchair would easily be identified as an obstacle via a LIDAR array, how would she be classified by an object-recognition algorithm? As a pedestrian? A person-in-a-wheelchair? Or an inanimate object? This depends in large part on whether designers saw fit to include such labeled images when training their classifier, and whether they included another internal category for wheelchairs. What about


correlate those with the lights it detects with its cameras to know which ones apply to its lane.89 If it cannot determine the light's color, it must revert to minimal-risk behavior (in this case, treating the light as a yellow and moving through slowly and cautiously while waiting for more information90). With the right infrastructure in place, this sort of detection would be unnecessary, or would be a back-up system only. Radio-control systems at each light could coordinate traffic, and send out wireless signals, synchronized with the lights, to provide the appropriate go or stop information in another form. Though invisible to us, such data would be far more easily interpreted by computers. But while radio control of vehicles has been proposed since the 1950s, the high cost of changing all infrastructure to match new standards ensures that progress is gradual, if it happens at all. Traffic lights have been designed for people, not computers, and even date to the days when horses were still commonly used for transportation.91 But because installing special communications systems at each light would be a great infrastructural expense that Google has no control over, its vehicles cannot count on this information, and must try to perform the same visual tasks we do, even though those tasks make little sense for a vehicle that can communicate arbitrary information wirelessly.
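A hedged sketch of this fallback behavior (an illustration of the logic the patent describes in prose, not Google's actual implementation) can make it concrete:

```python
# Illustrative sketch: mapping a (possibly failed) traffic-light color
# detection to a driving behavior, with detection failure falling back to
# minimal-risk behavior, treating the light as a yellow.
def light_behavior(detected_color):
    """Map a detected light color (or None, on failure) to a behavior."""
    if detected_color == "green":
        return "proceed"
    if detected_color == "red":
        return "stop"
    # "yellow" or detection failure (None): proceed slowly and cautiously
    # while waiting for more information.
    return "proceed_with_caution"
```

The strangeness the patent exposes sits in the first argument: an entire perception pipeline exists solely to recover one bit of information that the light's controller already possesses and could, in principle, broadcast directly.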

Though prior knowledge, stored in maps and databases, may make driving safer and easier in most conditions, it may degrade safety if speed limits have recently changed; the traffic flow in an area may be very different if a police officer has set out cones and is directing traffic through difficult-to-interpret hand signals and glances. GPS maker TomTom plans a comprehensive mapping effort, using a fleet of specialized cars to build an up-to-date map database that autonomous cars can use to navigate.92 But the necessary level of continual mapping is a massive task if the vehicles must be usable everywhere. More likely, certain areas will be mapped and restricted, or separate, divided public rapid transit systems will operate on roads that can be carefully monitored.93 While Mountain View, California may be mapped early, rural West Virginia or Northern Maine may not be mapped as soon or as frequently. Inequalities may be increased if routes frequented by upper-middle-class professional commuters, most likely to own new autonomous

the computational load on the machine in the crowded visual landscape of driving, and allowing it to focus on elements of the environment that are changing rather than those likely to be static.84 Prior knowledge of speed limits should make the car's behavior more reliable and predictable in all conditions, even if speed limit signs are missing or obscured: consider how often human drivers, when faced with an absence of signs, base their behaviors on supposition or prior knowledge. When Google's car was certified for testing in Nevada, Google was allowed to pre-select the route the car would take, so that they could build the comprehensive model the system requires beforehand.85 The system would likely not have been capable of passing a test in which the examiner could have added detours on the fly. And though Google claims to have driven more than 700,000 miles with its cars, those are not 700,000 unique miles. A limited, thoroughly pre-mapped route has been driven many times to achieve those numbers.86
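The interplay of pre-mapped prior knowledge and live perception can be sketched as a simple precedence rule; all names and values here are illustrative assumptions, not any deployed system's logic:

```python
# Sketch: prior knowledge from a map supplements live perception. A freshly
# observed speed-limit sign overrides the stored map value, which in turn
# covers the case where signs are missing or obscured.
def effective_speed_limit(map_limit_kph, observed_sign_kph=None):
    if observed_sign_kph is not None:
        return observed_sign_kph  # live observation wins over a stale map
    return map_limit_kph          # fall back on pre-mapped prior knowledge

assert effective_speed_limit(50) == 50                        # sign obscured
assert effective_speed_limit(50, observed_sign_kph=30) == 30  # recent change
```

The sketch also shows where the danger lies: if the sign is both changed and unobserved, the stale map value wins silently, which is exactly the degradation of safety described above.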

    Mapping brings in its own historical ideologies, of accuracy, comprehensiveness, and stasis. Mapping requires the world to largely remain as it is, and claims a unique capability to represent the real, objectively and diagrammatically. But the required maps expose certain frailties in these devices: they must know about speed limits, about traffic lights, about rules of the road that were never designed for autonomous systems. These devices must be molded to us, both to our caprices and to our longstanding, ingrained laws and habits. They must include historical knowledge, rooted in the legal and social histories of roadways, which may differ between cities and states, and certainly between countries across the world. The world has not been built with machines in mind.

Consider Google's patent for traffic light detection, which, when carefully considered, exposes the strangeness within current approaches to autonomous vehicles.87 Google's self-driving car is a complex computational device that possesses some capabilities far beyond those of humans, such as the ability to transmit arbitrary data across wireless links. And yet, this device is relegated to emulating human capabilities: detecting colored blobs on a camera image and deciding its behavior based on that information.88 To do so it must know the expected positions of the lights, and


beat Garry Kasparov, chess ceased to be the standard by which intelligence could be judged, precisely because it had been achieved. Real intelligence had to lie elsewhere: for example, in the game of Go, mastery of which has continued to elude machines.97

Nevertheless, machines manage to do things that seem intelligent. So though these terms are heuristics for understanding the observed behaviors of machines, they slip slowly over time from self-conscious scare-quoted use into casually accepted statements. While automatic translation may seem intelligent, or a system that can define étoile as star may seem to possess knowledge, this intelligence or knowledge is perhaps very different from our own. A deep epistemological question presents itself: how do we know, and how do machines know? Many AI systems operate via statistical pattern recognition, so we may ask whether we believe human intelligence is also merely pattern recognition: does a system that can associate star with its definition really know what a star is? Is mere linguistic association sufficient for knowledge?

    H. R. Ekbia98 and others remind us that we should be skeptical of the applications of these terms to computational processes. John Searle, a critic of AI famous for the thought experiment of the Chinese room, argues that symbol processing and pattern recognition alone is not intelligence, though from the outside the results may appear to be intelligent.99 Though his arguments are not conclusive,100 his caution about ascribing overly ambitious human ideas to computational processes is warranted. Even Pamela McCorduck, a colleague of several notable AI researchers and a believer in the field in general, hedges on how intelligent some of the programs she discusses in Machines Who Think actually are. Ornstein, Smith, and Suchman, in their 1984 article Strategic Computing, warn of the difference between domain capabilities and common sense, and suggest that unwarranted optimism and a particular funding climate (issues also present today) push researchers to mask the shortcomings of AI with semantic shifts.101 We alter the definitions of knowledge and understanding to fit what our machines can do, and these claims, taken literally, give rise to unrealistic confidence in the power of the technology.102 The two-way process of linguistic

    cars, are mapped first, while roads around low-income communities are ignored. Local customs and behaviors differ, and even if maps are available, the same vehicle programming may not work for Los Angeles, Boston, and the rural Midwest, let alone Singapore, Mumbai or Cairo. The map, for all of its objective standardization, still represents real places subject to cultural histories and vulnerable to socio-economic dynamics.

Despite valid criticism, Langdon Winner's story of Robert Moses's New York bridges (purportedly designed to purposefully exclude buses, and therefore low-income Black citizens, from using the routes) remains an important parable about the anticipated or unanticipated effects of urban planning on equality.94 Cities and roadways have developed gradually through the actions of millions of people over hundreds of years, in ways that are marked by race, class, and local custom. That human diversity cannot be erased simply through the use of one uniform, digital representation. And new technological developments will not necessarily have an egalitarian effect. From this geographical lens, we are known through our objects, as makers of a space that was never meant for our algorithmic chauffeurs, which have to be designed around us in pathological ways, and which, far from being an inherent leveler, present the possibility that transportation will be even more class-marked.

    Can Machines Know Anything?

But we will not truly be known at all, at least not in the near future. Though computer science and philosophies of AI have been using intelligence, knowledge, and understanding, among other words, to talk about computers since the beginning of the field, these uses should not be taken at face value. Intelligence is slippery, and its definition is not constant over time. It is difficult to define intelligence in ourselves,95 and yet another thing to define it in relation to other entities. Weaving was once considered to be a peculiarly human capability, a sign of an advanced, intelligent mind.96 But after Vaucanson's loom allowed mechanical devices to weave seemingly on their own, this capacity was no longer seen as uniquely human, and was no longer a marker of intelligence. The same process occurred with chess in the 1990s. When IBM's Deep Blue


heuristics and statistical predictions. Each decision made represents a weighing of risks, and a potential for real harms, and therefore has unavoidable ethical dimensions. Whether robots are explicitly given an ethical calculus is beside the point, as it would be only a numerical representation, an attempt to quantify human worth such that a computer could understand it and make decisions based upon it. Current technology forbids human-like ethics in machines. In this picture we are not known, because we cannot be. We can be only information: risks dealt with by rules made from lines of code.

    Knowing Machines

    In the other direction then, how should we know autonomous vehicles? As we have seen, though they are new types of objects that present new features and challenges, they are, like any other artifact, clearly enmeshed in their history. Their legal, political, and social dimensions are bound up in the history of transportation, in the construction of our roadways, in the principles that have driven the development of automata, automation, autopilot.

But while these systems cannot know us, since they cannot know anything, it is important to recognize that decisions are not really being made by amoral vehicles. Not every device behavior can be predicted, and it would be foolish to place full responsibility on the programmers: there is real autonomy in devices, in that they may do things we do not want. But though all devices have bugs and will be unpredictable in certain circumstances, the first place to look for ethics, for an implicit ethical calculus, is the human beings who do the design. Ultimately, if we want to care about how systems operate, we must look at how they are programmed.

    Autonomy displaces decisions about vehicle operations in space and time, to the depths of corporate headquarters where we cannot see. In human crashes, we know when and where decisions happened. We can point, at least in theory, to a moment of distraction, a failure of perception, an impairment of focus. But autonomy removes this easy surety. Autonomous cars will likely be closed-source, protected intellectual property, such that any attempt to introspect their workings can be blocked.105 We may know them only by their external

and technological change (that intelligence gets applied to describe whatever researchers manage to achieve, while real intelligence retreats away from each computational advance) leaves these terms poorly defined.

    And yet the ideology of artificial intelligence, the focus of the field itself, is bound up in the idea of intelligent machines that can be said to know. Does a computerized car know the value of a person? We might wish that it would. AI researcher Doug Lenat said in 1997 that:

Before we let robotic chauffeurs drive around our streets, I'd want the automated driver to have a general common sense about the value of a cat versus a child versus a car bumper, about children chasing balls into the streets, about young dogs being more likely to dart in front of cars than old dogs (which, in turn, are more likely to bolt than elm trees are), about death being a very undesirable thing103

    This is a difficult knowledge and perception problem. But even more, it is an issue of selfhood, embodiment, even sentience. While cats, children and bumpers can be identified as objects, and children chasing balls into the streets can be identified as patterns, a computer programmed to respond to these stimuli may respond correctly without knowing anything. While a machine can be programmed to avoid running into people, can it have any understanding of death? Can it be programmed to feel guilt? Does it need to?

But while the creation of truly intelligent, thinking, knowing, understanding systems may well be possible in the long term, these will not be the autonomous vehicles that we see in the near future. The focus of development in the industry is functionalist: building systems that work. The dream of general-purpose human-like intelligence goes on elsewhere: Google's project groups (Google Chauffeur and DeepMind) are geographically and organizationally separated, one in California and the other in London.104 The autonomous cars we may see will not be emotional, empathetic, or capable of moral and ethical judgments.

    Nonetheless, their behaviors will instantiate such judgments, based on human-authored


organizations operating expensive technology in a high-risk environment want humans to be responsible for the well-being of the equipment.

On another front, major aircraft manufacturers Boeing and Airbus are now stepping back from complete automation while increasing computerization: a history of accidents, including Air France Flight 447, which crashed due to a difficult hand-over of control from the computer system to the pilots, speaks to the problems facing highly automated systems when automation has to shut down.111 The response has been to involve the pilot more fully in aircraft operations at all times, and to dynamically adjust pilot workload.112

    Like aircraft, automated cars will carry people, who by their bodily presence ultimately entrust their lives to the system. And like Mars rovers they will be expensive pieces of capital equipment, of whose positions and well-being we would like to be assured. Full autonomy without monitoring, the kind of autonomy envisioned by those who suggest one might go to sleep while the vehicle is operating, contravenes both of these expectations. Though making hybrid human-machine systems can be a more difficult engineering problem than taking the human out entirely, it has benefits in terms of resilience, capability, and adapting to new situations.113 And hybrid strategies currently prevail not primarily for technical reasons, but for deep social, cultural, and human ones, which are unlikely to change as quickly as technology.

Real-life operations of existing systems are more complicated than simple human/machine dichotomies can capture. What tasks we allot to a machine may radically change the sort of information it needs, the sort of knowing we ask it to engage in, and the sort of knowing that we need it to be capable of in order for us to feel comfortable with its operations. And through hybrid, as opposed to fully automated, systems, we would be called to interact with, and to know, our vehicles more fully, and to collaborate with them instead of ceding our own agency. The paths that research and commercialization take stand to alter knowing in both directions. Discomfort about autonomy (or concern about machines and knowing) is not tantamount to Luddism. It can and should be used to challenge easy assumptions, in the press and elsewhere, about the necessary shape of the future.

    signals, not in the details of their operations (or at least not without lengthy and protracted litigation to expose those details). We will be subjects of moral decisions, made behind closed doors, enacted by systems incapable of morality.

    This presents a potential legal, philosophical, and cultural crisis. Current attempts by the NHTSA, in their preliminary policy document, fail to cut to the heart of autonomy, despite their laudable efforts to articulate basic principles for autonomous vehicle testing. Their policy outlines functional requirements and standard test regimes, but does not seek to look into the labs where devices are designed and built, and is uncritical about probable futures of autonomous vehicles.106 But history shows that laws and expectations change, and can be changed. Automated vehicles are not a panacea, and public voices should enter into the discussion; the public should be able to see inside, to know the systems intended to keep them safe.107

    Lessons from Autonomy History

As I have shown, autonomous vehicles will be media technologies regardless of the precise details of their engineering, but it would be remiss to conclude without considering historical precedents. Systems that were envisioned as autonomous (particularly technologies for exploration, whether manned or unmanned) often lose much of that autonomy in practice, supplemented with human-controlled or mixed approaches, or relegated to back-up systems.

    When the Apollo project began, some engineers questioned the need for a complicated human-machine interface: the pilot would instead be a missile rider, responsible for pushing an abort button if the mission went awry but otherwise uninvolved in the process of flying.108 But ultimately, a highly automatic flight computer was supplemented with a human interface festooned with buttons, toggles, switches, and controls of all kinds. And though the lunar lander had fully capable auto-land functionality, pilots turned it off and landed the vehicle manually.109 More recently, the Spirit and Opportunity rovers, far from being autonomous robots, are instead more like remote tools, extending human reach but largely carrying out our commands under supervision.110 Human common sense is integral to rover missions, and


Erik Stayton is a technologist and technology scholar interested in shaping the future of human relationships to technology by studying and critiquing their past, their present, and conventionally accepted visions of their future. He received his dual-degree Sc.B. from Brown University in physics and English literature, with an honors thesis in gravitational lensing. After several years as a designer, programmer, and educational writer, he came to MIT Comparative Media Studies, where he completed a master's thesis on automated vehicle technologies and the often unacknowledged complexity and hybridity of automated systems. That work (Driverless Dreams: Technological Narratives and the Shape of the Automated Car) holds that only an eye toward the design of the whole system (humans and machines in the context of broader social goals) will reliably produce vehicles that live up to our driverless dreams. Erik is now a Ph.D. candidate at MIT HASTS, researching the social implications of AI and automation technologies.

    Notes

1. Enthusiasts previously installed radios into their own vehicles, but at least by 1928, with the founding of Motorola, the car radio had become a consumer product.
2. See for example the electronic instrument clusters of the Lexus LFA and the 2014 Lexus IS-F, which emulate physical gauges, or the digital dash display of the Toyota Prius.
3. For example, Ford's MyFord Touch: much like for other electronic systems, an examination of the Ford website shows this technology is being sold on its customization, convenience (e.g. voice control), and safety features.
4. For example, Honda calls this feature, as implemented on the Accord Hybrid, LaneWatch.
5. Melissa Riofrio, Mercedes-Benz's F015 Concept is a Self-Driving, Hydrogen-Powered Living Room, PCWorld, January 6, 2015, accessed January 28, 2015, http://www.pcworld.com/article/2865478/mercedes-benzs-f015-concept-is-a-self-driving-hydrogen-powered-living-room.html.
6. Joe Simpson, Driver-Less Car Design: Are We Sleep-Walking Into the Future? Pocket-Lint, January 12, 2015, accessed January 28, 2015, http://www.pocket-lint.com/news/132343-driver-less-car-design-are-we-sleep-walking-into-the-future.
7. See a brief summary by the Information Commissioner's Office, https://iconewsblog.files.wordpress.com/2014/05/key-points-of-cjeu-case.pdf.
8. Specifically, "whilst it is true that the data subject's rights also override, as a general rule, that interest of internet users, this balance may however depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject's private life and on the interest of the public in having that information." Court of Justice of the European Union, Press Release No 70/14, Luxembourg, May 13, 2014, http://curia.europa.eu/jcms/upload/docs/application/pdf/2014-05/cp140070en.pdf.
9. Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, in Seventh International World-Wide Web Conference (WWW 1998), April 14-18, 1998, Brisbane, Australia, via Stanford InfoLab Publication Server, http://ilpubs.stanford.edu:8090/361/.
10. Sergey Brin, Extracting Patterns and Relations from the World Wide Web, in The World Wide Web and Databases: International Workshop WebDB'98, Valencia, Spain, March 27-28, 1998, Selected Papers, ed. Paolo Atzeni, et al. (Berlin: Springer, 1999), 172-183.
11. Electronic Frontier Foundation, Do Not Track, accessed January 28, 2014, https://www.eff.org/issues/do-not-track.
12. This quote is attributed to Jonathan Zittrain, but his own blog attempts to determine its provenance and locates it elsewhere.
13. Xeni Jardin, In an Effort to Suck up to Local Governments, Uber Plans to Share Your Ride Data, BoingBoing, January 15, 2015, accessed January 28, 2015, http://boingboing.net/2015/01/15/in-an-effort-to-suck-up-to-loc.html.
14. See Helen F. Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford, CA: Stanford University Press, 2010).
15. Nissenbaum, 2.
16. Nissenbaum, 3.
17. See Donald MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (Cambridge, MA: MIT Press, 1993).
18. Will Knight, Driverless Cars are Further Away Than You Think, MIT Technology Review, October 22, 2013, accessed January 28, 2015, http://www.technologyreview.com/featuredstory/520431/driverless-cars-are-further-away-than-you-think/.
19. An autonomous vehicle researcher at MIT quoted prices in the range of $70,000 to $100,000 per device for the GPS alone, while noting that those would of course come down with greater production volume.
20. See for example Tyler Becker, Google Says Driverless Cars Will Join the Roads in Less than 5 Years, Social Media Week, January 18, 2015, accessed January 28, 2015, http://socialmediaweek.org/blog/2015/01/google-says-driverless-cars-will-join-roads-less-5-years/.
21. Holly Ellyatt, Driverless Cars Coming, But Consumers Not Ready Yet, CNBC, September 25, 2013, accessed October 17, 2014, http://www.cnbc.com/id/101061071.
22. OnStar, Home, accessed October 19, 2014, https://www.onstar.com/us/en/home.html.
23. Jameson Wetmore, Driving the Dream: The History and Motivations Behind 60 Years of Automated Highway Systems in

  • 22 SPRING 2016

    SENSING, SEEING, AND KNOWINGAmerica, Automotive History Review (Summer 2003, via PDF from Consortium for Science, Policy, & Outcomes at ASU): 11.24. Wetmore, 7.25. Ibid.26. Wetmore, 15.27. Wetmore, 9.28. Ibid.29. National Highway Traffic Safety Administration, Preliminary Statement of Policy Concerning Automated Vehicles, May 30, 2013, accessed October 12, 2014, http://www.nhtsa.gov/staticfiles/rulemaking/pdf/Automated_Vehicles_Policy.pdf.30. Nissenbaum, 26-27.31. Commercial networked devices routinely lack basic security measures, see for example Kim Zetter, Hackers Can Mess With Traffic Lights to Jam Roads and Reroute Cars, Wired, April 30, 2014, accessed January 28, 2015, http://www.wired.com/2014/04/traffic-lights-hacking/. And for a large number of other systems, security is defeated by being out-of-date or by using default passwords that were never changed.32. Bryant Walker Smith, A Legal Perspective on Three Misconceptions in Vehicle Automation, in Road Vehicle Automation Lecture Notes in Mobility. Spring 2014. Via SSRN: http://ssrn.com/abstract=2459164.33. Ryan Chin, interview with the author, August 20, 2014.34. William J. Mitchell, Reinventing the Automobile: Personal Urban Mobility for the 21st Century (Cambridge, MA: MIT Press, 2010), 154.35. Brandon Griggs, Googles new self-driving car has no steering wheel or brake, CNN, May 28, 2014, accessed January 25, 2015, http://www.cnn.com/2014/05/28/tech/innovation/google-self-driving-car/index.html.36. Human factors research shows that diagnostics and controls are key to trust in automation. David Mindell, discussion with the author, December 3, 2014.37. Jiajun Zhu, et al., System and method for predicting behaviors of detected objects, US Patent Application US2012/0083960 A1, filed October 3, 2011.38. See for example Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, UCLA Law Review, Vol. 
57 (August 13, 2009): 1701, via SSRN: http://ssrn.com/abstract=1450006, or Seth Schoens cogent overview at the EFF (https://www.eff.org/deeplinks/2009/09/what-information-personally-identifiable).39. Nissenbaum, 25.40. Nissenbaum, 26.41. Groups such as the EFF and ACLU are attempting to do something about this, see Nadi Kayyali, EFF Submits Letter Opposing Oaklands Domain Awareness Center, Electronic Frontier Foundation, February 18, 2014, accessed January 28, 2015, https://www.eff.org/deeplinks/2014/02/eff-submits-letter-opposing-oaklands-domain-awareness-center.42. For some insight into the recorded information, see Jeremy Gillula and Dave Maass, What You Can Learn from Oaklands Raw ALPR Data, Electronic Frontier Foundation, January 21, 2015, accessed January 28, 2015, https://www.eff.org/deeplinks/2015/01/what-we-learned-oakland-raw-alpr-data. 43. Zhu, 7.44. Nissenbaum, 21.45. This is represented by descriptions like more data allowing ever-narrower segmentation of customers and therefore much more precisely tailored products or services, see James Manyika et al., Big Data: The Next Frontier for Innovation, Competition, and Productivity, McKinsey Global Institute, May 2011, accessed January 28, 2015, http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation.46. Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, Wired, June 23, 2008, accessed October 27, 2014, http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory.47. Though the core of the article seems to miss the point, Nassim Talebs short piece Beware the Big Errors of Big Data, Wired, February 8, 2013, accessed October 25, 2014, http://www.wired.com/2013/02/big-data-means-big-errors-people/ at least brushes this particular issue: take enough data and you will find patterns. The reality of those patterns depends in large part on whether what you collected was accurate.48. 
Closely linked is the question of the sociology of algorithms, technical products of human effort that impose a certain type of truth, see Tarleton Gillespie, Chapter 9: The Relevance of Algorithms, in Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski and Kirsten Foot (Cambridge, MA: MIT Press, 2014), 167-193.49. Neil Richands and Jonathan King document some of these developments in BIG DATA ETHICS, Wake Forest Law Review 49, no. 2 (Summer 2014): 393-432.50. See Lee Gomes, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts, IEEE Spectrum, October 20, 2014, accessed October 25, 2014, http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts, as well as Michael Jordans response: Big Data, Hype, the Media, and Other Provocative Words to Put in a Title, UC Berkeley AmpLab, October 22, 2014, accessed October 25, 2014, https://amplab.cs.berkeley.edu/2014/10/22/big-data-hype-the-media-and-other-provocative-words-to-put-in-a-title/.51. See Uber, #uberdata | Uber Blog, accessed January 29, 2015, http://blog.uber.com/uberdata/.52. Quinten Plummer, Facebook Launches New Ad Network for Bigger Marketing Audience, Tech Times, October 7, 2014, accessed January 28, 2015, http://www.techtimes.com/articles/17411/20141007/facebook-launches-new-ad-network-boasting-a-

  • 23TECHNOLOGIES OF KNOWING

    STAYTONlocal-twist.htm.53. Simpson, Driver-Less Car Design.54. Steffen Krger and Jacob Johanssen, Alienation and Digital LabourA Depth-Hermeneutic Inquiry Into Online Commodification and the Unconscious, TripleC (Cognition, Communication, Co-Operation): Open Access Journal for a Global Sustainable Information Society, Vol. 12 Issue 2 (2014): 633-634.55. Christian Fuchs, cited in Krger and Johanssen, 636.56. Jessica Riskin, The Defecating Duck, or, the Ambiguous Origins of Artificial Life, Critical Inquiry, Vol. 29, No. 4 (Summer 2003): 625.57. Norbert Wiener, The Machine as Threat and Promise, St. Louis Post-Dispatch, December 13, 1953. From Norbert Wiener Papers, MC 22, Box 30C, Folder 732, Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.58. David E. Nye, Americas Assembly Line, (Cambridge, MA: MIT Press, 2013), 159.59. David E. Nye, Electrifying America: Social Meanings of a New Technology, 1880-1940, (Cambridge, MA: MIT Press, 1990), 361.60. Light Beams Steer Super Racing Cars, Modern Mechanix (April, 1936): 71, accessed December 15, 2014, http://blog.modernmechanix.com/light-beams-steer-super-racing-cars/.61. Light Beams Steer Super Racing Cars.62. The project, though often considered a failure, spurred significant development and investment in AI while it lasted. See Pamela McCorduck, Machines Who Think [electronic resource]: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, (Natick, MA: A.K. Peters, 2004), accessed January 28, 2015 via Books24x7, Inc.63. See Dickmanns, E. D., et al. 1994. The Seeing Passenger Car VaMoRs-P, Proceedings of the Intelligent Vehicles 94 Symposium:. 68-73. Via IEEE Explore.64. SRIs Shakey robot, for example, made navigation plans via rudimentary computer vision tools, specifically sonar range-finding.65. Gomes, Machine-Learning.66. See Berthold Ulmer, VITA II Active Collision Avoidance in Real Traffic, Proceedings of the Intelligent Vehicles 94 Symposium: 1-6. 
Via IEEE Explore.67. See Berthold Ulmer, VITA An Autonomous Road Vehicle (ARV) for Collision Avoidance in Traffic, Proceedings of the Intelligent Vehicles 94 Symposium: 36-41. Via IEEE Explore.68. John Leonard, discussion with the author, December 3, 2014.69. Jrgen Dickmann, et al., Making Bertha: Radar is the Key to Mercedes-Benzs Robotic Car, IEEE Spectrum (August 2014): 44-49.70. For LIDAR basics, see Erik Gregersen, Lidar, Encyclopaedia Britannica, September 2014, accessed January 28, 2015.71. See for example the object detection work done by Y. Fukuda et al, Target Object Classification Based on a Fusion of LIDAR Range and Intensity Data, Inspec ( January 1, 2014), via EBSCOhost, accessed January 28, 2015.72. Lee Gomes, Driving in Circles: The Autonomous Google Car May Never Actually Happen, Slate, October 21, 2014, accessed, October 27, 2014, http://www.slate.com/articles/technology/technology/2014/10/google_self_driving_car_it_may_never_actually_happen.single.html.73. Gde Both, What Drives Research in Self-Driving Cars? (Part 1: Two Major Events), CASTAC Blog, April 1, 2014, accessed October 27, 2014, http://blog.castac.org/2014/04/what-drives-research-in-self-driving-cars-part-1-two-major-events/.74. Gde Both, What Drives Research in Self-Driving Cars? (Part 2: Surprisingly, not Machine Learning), CASTAC Blog, April 3, 2014, accessed October 27, 2014, http://blog.castac.org/2014/04/what-drives-research-in-self-driving-cars-part-2-surprisingly-not-machine-learning/.75. Gde Both, What Drives . . . Part 2.76. Zhu, 5.77. See Luke Fletcher et al., The MIT Cornell Collision and Why It Happened, The DARPA Urban Challenge (Springer Berlin /Heidelberg, 2009): 509-548.78. See for example Volvos ad for the S60s pedestrian detection capabilities: https://www.youtube.com/watch?v=UdTQfegCxF8, accessed January 28, 2015.79. 
Consider for example his connection of the chronophotographic gun, the precursor of the film camera, to military uses and the military-industrial complex. See Friedrich Kittler, Gramophone, Film, Typewriter (Stanford, CA: Stanford University Press, 1999). Given DARPAs historical interest in machine vision, similar connections seem warranted.80. See John Markoff, Researchers Announce Advance in Image-Recognition Software, New York Times, November 17, 2014, accessed January 28, 2014, http://www.nytimes.com/2014/11/18/science/researchers-announce-breakthrough-in-content-recognition-software.html.81. John Leonard, discussion with the author, December 3, 2014.82. Lee Gomes, Hidden Obstacles for Googles Self-Driving Cars, MIT Technology Review (August 28, 2014), accessed October 27, 2014, http://www.technologyreview.com/news/530276/hidden-obstacles-for-googles-self-driving-cars/.83. Nhai Cao (Global Product Line Manager at TomTom), presentation at The Road Ahead Forum on Future Cities 2014, Cambridge, MA, MIT, November 21, 2014.84. Gomes, Hidden Obstacles.85. Mark Harris, How Googles Autonomous Car Passed the First U.S. State Self-Driving Test, IEEE Spectrum (September 10, 2014) accessed October 25, 2014, http://spectrum.ieee.org/transportation/advanced-cars/how-googles-autonomous-car-passed-the-first-us-state-selfdriving-test/?utm_source=techalert&utm_medium=email&utm_campaign=091114.

  • 24 SPRING 2016

    SENSING, SEEING, AND KNOWING86. Gomes, Driving in Circles.87. Nathaniel Fairfield, Christopher Paul Urmson, and Sebastian Thrun, Traffic Signal Mapping and Detection, US Patent Application US 2014/0016826 A1, filed September 18, 2013.88. Fairfield et al., 1.89. Fairfield et al., 9.90. Fairfield et al, 10.91. The original invention is generally credited to John Knight in 1868.92. Cao, The Road Ahead Forum.93. This point came up in a discussion with a scientist with experience with a UK-based firm working on public rapid transit, or PRT, systems.94. See Langdon Winner, Do artifacts have politics? Daedalus, Vol 109 (Winter 1980): 121-136.95. See for example Earl Hunt, Defining Intelligence..Step 2: How Should We Define Intelligence?, Psychology Today (May 30, 2011) accessed January 28, 2014, https://www.psychologytoday.com/blog/exploring-intelligence/201105/definingintelligencestep-2.96. Riskin, 627.97. When IBMs Deep Blue beat Gary Kasparov in 1997, most Artificial Intelligence researchersand commentators decided that chess playing did not require intelligence after all and declared a new standard, the ability to play Go, Riskin, 623.98. Hamid Reza Ekbia, Artificial Dreams: The Quest for Non-Biological Intelligence (Cambridge: Cambridge University Press, 2008).99. See John R. Searle, Minds, Brains, and Programs, in The Philosophy of Artificial Intelligence, ed. Margaret Boden (Oxford: Oxford University Press, 1990), 67-87.100. The Chinese room example is as much criticized as it is referenced, and I myself take issue with it in details.101. Servero M. Ornstein, Brian C. Smith and Lucy A. Suchman, Strategic computing, Bulletin of Atomic Scientists (December 1984): 14.102. Ornstein et al., 15.103. Ekbia, 122.104. As of September 2014, the DeepMind group was not in contact with Chauffeur, according to a DeepMind employee.105. Digital rights management and anti-circumvention laws are abused routinely to lock out consumers and even muzzle security researchers. 
For a recent example: Parker Higgins, DRM in Cars Will Drive Consumers Crazy, Electronic Frontier Foundation (November 13, 2013), accessed October 17, 2014, https://www.eff.org/deeplinks/2013/11/drm-cars-will-drive-consumers-crazy. 106. The NHTSAs taxonomy of vehicle automation progresses hierarchically from no automation, to one control system automated, to both control systems automated (where a control system is either the steering wheel or the accelerator/brake). This leaves no room for systems in which both functions are partially automated. And it fails to consider much existing or potential automation of features other than the steering system or the accelerator. In this respect the Society of Automotive Engineers report J3016 (Surface Vehicle Information Report, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, January 2014) does better, though it also seems to foreclose certain alternative approaches.107. Michael Sivak and Brandon Schoettle, transportation researchers at University of Michigan, recently published a report (Road Safety with Self-Driving Vehicles: General Limitations and Road Sharing with Conventional Vehicles, University of Michigan Transportation Research Institute, Report No. UMTRI-2015-2, January 2015, via http://www.driverlesstransportation.com/wp-content/uploads/2015/01/UMTRI-2015-2.pdf ) pointing out that fatalities will not drop to zero with autonomous vehicles.108. The engineers in question included none other than Werner von Braun, see David Mindell, Digital Apollo: Human and Machine in Spaceflight (Cambridge, MA: MIT Press, 2008), 67-68.109. Mindell, Digital Apollo, 5-6.110. See for example chapters 3 and 4 of William Clancey, Working on Mars: Voyages of Scientific Discovery with the Mars Exploration Rovers (Cambridge, MA: MIT Press, 2012).111. 
See Bureau dEnqutes et dAnalyses, Final report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF-447 Rio de Janeiro Paris, English Edition, BEA, June 2012.112. David Mindell, discussion with the author, September 10, 2014.113. See Thomas Sheridan Telerobotics, Automation, and Human Supervisory Control (Cambridge, MA: MIT Press, 1992) who discusses both the benefits of human supervisory control (314, 336) and the well-known dangers of automation that relies upon the human as a back-up (261). His section on automobiles and Intelligent Vehicle-Highway Systems lists essentially all types of vehicles safety systems currently under development, and notes that the road to full-automation is likely to be long, and some would say essentially unreachable (254).