Vernor Vinge: Cheating at the Turing Test

Vernor Vinge on cheating at the Turing Test:

As with past computer progress, the achievement of some goals will lead to interesting disputes and insights. Consider two of [Rodney] Brooks's challenges: manual dexterity at the level of a 6-year-old child and object-recognition capability at the level of a 2-year-old. Both tasks would be much easier if objects in the environment possessed sensors and effectors and could communicate. For example, the target of a robot's hand could provide location and orientation data, even URLs for specialized manipulation libraries. Where the target has effectors as well as sensors, it could cooperate in the solution of kinematics issues. By the standards of today, such a distributed solution would clearly be cheating. But embedded microprocessors are increasingly widespread. Their coordinated presence may become the assumed environment. In fact, such coordination is much like relationships that have evolved between living things. [link added]

I don't think Vinge goes nearly far enough. I've thought for at least twenty years that our standards for machine sentience (or any non-human sentience, for that matter) were hopelessly anthropomorphic. For most purposes, "Artificial Intelligence" should have been called "Artificial Human-Like Intelligence."

I lack sympathy for the idea that we have only ourselves to judge other minds by. People have been dealing with beings of different intelligence for as long as there have been human beings. (We call them "animals.") I'd like to think we've figured out something by now -- or at least figured out how to figure it out -- about how to judge intelligence without provoking the shades of Alan Turing and G. B. Shaw to peals of ghostly laughter.

To Vinge's immediate point: If we require of the machine that it be like us, we are placing an irrelevant constraint on our evaluation of its intelligence. Imagine if machines were evaluating us -- think how short we'd fall in their estimation, with our slow electro-chemical communications network and the lossy construction of our visual apparatus. And how slowly we calculate!


Down low...too slow! IBM is TranScalar Systems

The Advent of Malicious Circuits | Beyond the Beyond from Wired.com

(((There's a new one. You bake the malware right into the hardware, then release the hardware into the wild. With keystroke loggers, spamware and trojans built right into the chip itself, you're home-free against software-based detection.)))

(((How do you get victims to buy your subverted chips, though? That one seems pretty obvious: product forgery. Sell 'em your China-based Appl3 H-phone. If the price is right, they'll go for it -- and with the fraud money you'd pull down from a scheme like this, you could give the hardware away -- even pay fools to take it.)))

Many moons ago, I started working on a novel about a brainwashed super-agent. (Who bears an eerie and entirely accidental resemblance to Jason Bourne, of whom I was blissfully ignorant at the time. Not that he was entirely original; just that he was more like a mashup of Piers Anthony's "agents" and Graham Greene's "Professor D." But that's beside the point.) He would have been hoodwinked into joining, then trained and brainwashed and turned into a psychotic robot to be sent out on dangerous missions, then brought back and brainwashed again for the next job. (That's the part I cribbed from Anthony.) My first sketches on that idea date to the summer of 1980. They're pretty bad. In summer and fall of '80, I worked through two drafts of a novella that outlined that character and situated him relative to the great technical bureaucracy that I imagined him serving, and ultimately defying, like a cancer cell.

The character stuck in my brain and I started fleshing out the parts of the idea that had to do with how you would organize a great invisible intelligence enterprise. I'd created a pretty coherent vision of how the whole system worked and started sketching some much better stuff as long ago as 1984; I'd settled on the idea that his "master" was a sentient but profoundly alien AI as early as about 1988 or so; by early 1992, I'd worked out how the system-monster communicated without being noticed, by hiding its traffic as noise packets on the Internet; more sophisticated messages could be "book-coded" in Usenet messages. By 1995, while working at Kodak, I merged that nightmare vision with another, based loosely on the legend of Volund/Wayland Smith, and concocted a grand, long-range story of a conflict between two new sentiences, one accidental, the other planned, and neither seeming very human. The driver on the "planned" side was a small but powerful firmware vendor called TranScalar Systems, who designed chips for communications applications. Their chips were in everything, and spyware was in all their chips. That gave their sentient monitoring system absolute control over data streams (as long as the government-owned monster didn't realize it was there).
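(For the curious: the "book code" trick is simple enough to sketch in a few lines. What follows is a toy Python illustration of my own devising -- nothing from the novel, and certainly not real tradecraft -- that encodes a message as (post, word) coordinates into a shared corpus of posts. A real system would also have to hide the coordinates themselves, which is where the noise-packet trick would come in.)

```python
# Toy "book code" over a shared corpus of posts. Everything here is a
# hypothetical illustration; a real system would hide the coordinates
# themselves (e.g., as apparent noise in traffic) rather than send them.

SHARED_CORPUS = [
    "the quick brown fox jumps over the lazy dog",
    "meet me at the usual place when the moon is full",
]

def encode(message):
    """Encode each word of `message` as a (post_index, word_index) pair."""
    coords = []
    for word in message.lower().split():
        for p, post in enumerate(SHARED_CORPUS):
            words = post.split()
            if word in words:
                coords.append((p, words.index(word)))
                break
        else:
            raise ValueError("word not in shared corpus: " + word)
    return coords

def decode(coords):
    """Recover the message from (post_index, word_index) pairs."""
    return " ".join(SHARED_CORPUS[p].split()[w] for p, w in coords)

if __name__ == "__main__":
    secret = encode("meet at the fox")
    print(secret)          # [(1, 0), (1, 2), (0, 0), (0, 3)]
    print(decode(secret))  # meet at the fox
```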

I'd done almost all of this while being more or less completely ignorant of cyberpunk. I didn't read any Gibson until about 1999, no Sterling until '98. So on the one hand, I reinvented some wheels. On the other, I had some ideas that I now know never really got much traction. The Sprawl trilogy, for example, is thick with deeply alien AIs, but that vision never caught on -- the modern post-cyberpunk transhumanist AI is cloyingly human, as a rule.

Since then, one by one, most of the stuff I dreamed up has hit the mainstream. In 1997's The Saint, Simon uses a Usenet-based "book code" to trade messages regarding his contracts. Jason Bourne, who'd been there all along, of course, entered my consciousness in the early oughts. And now I learn that the idea of embedding malware in hardware is finally making the mainstream. I'm a visionary without visible portfolio. It's my own damn fault for not writing it earlier, of course.

From Usenix:

Abstract

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware.

We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

1 Introduction

1.1 Motivation

Attackers may be able to covertly insert circuitry into integrated circuits (ICs) used in today’s computer-based systems; a recent Department of Defense report [16] identifies several current trends that contribute to this threat.

First, it has become economically infeasible to procure high performance ICs other than through commercial suppliers. Second, these commercial suppliers are increasingly moving the design, manufacturing, and testing stages of IC production to a diverse set of countries, making securing the IC supply chain infeasible. (((Uh-oh.))) Together, commercial-off-the-shelf (COTS) procurement and global production lead to an “enormous and increasing” opportunity for attack [16].

Maliciously modified devices are already a reality. In 2006, Apple shipped iPods infected with the RavMonE virus [4].

....

Using modified hardware provides attackers with a fundamental advantage compared to software-based attacks. Due to the lower level of control offered, attackers can more easily avoid detection and prevention. The recent SubVirt project shows how to use virtual-machine monitors to gain control over the operating system (OS) [11].

This lower level of control makes defending against the attack far more difficult, as the attacker has control over all of the software stack above. There is no layer below the hardware, thus giving such an attack a fundamental advantage over the defense.

Although some initial work has been done on this problem in the security community, our understanding of malicious circuits is limited.

IBM developed a “trojan circuit” to steal encryption keys [3]. By selectively disabling portions of an encryption circuit they cause the encryption key to be leaked. This is the best example of an attack implemented in hardware that we are aware of (...)

Indeed, a single hard-coded attack in hardware greatly understates the power of malicious circuitry. This style of attack is an attack designed in hardware; nobody has designed hardware to support attacks. The design space of malicious circuitry is unexplored, outside of simple, special-purpose, hard-coded attacks. Responding to the threat of trojan circuits requires considering a variety of possible malicious designs; further, it requires anticipating and considering the attacker’s countermoves against our defenses. Without such consideration, we remain open to attack by malicious circuits....
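One bit of arithmetic worth doing (mine, not the paper's) is to put that 1341-gate figure in perspective. Assuming -- order of magnitude only -- a circa-2008 desktop CPU of a couple hundred million transistors, and a rough rule of thumb of four transistors per logic gate:

```python
# Rough scale of the IMP login backdoor (my arithmetic, not the paper's).
backdoor_gates = 1341        # figure quoted in the paper's abstract

cpu_transistors = 2e8        # assumption: ~2008 desktop CPU, order of magnitude
transistors_per_gate = 4     # assumption: rough rule of thumb
cpu_gates = cpu_transistors / transistors_per_gate

print(f"backdoor: {backdoor_gates / cpu_gates:.6%} of the chip's gates")
# -> backdoor: 0.002682% of the chip's gates
```

Even if those assumed numbers are off by an order of magnitude either way, the backdoor stays a vanishingly small fraction of the chip -- which, if I'm reading the abstract right, is exactly the paper's point about detection.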

 

A Love Tap From Harlan Ellison

Paolo Bacigalupi, interviewed at Wired Science, on getting unsolicited feedback from someone who can't be ignored:

Harlan Ellison called me up out of the blue.  It was soon after the short story had come out and I was in my house mopping the floor and I get this phone call and this man on the other end was like 'This is Harlan Ellison, do you know who I am?' and I was like 'Yeah, yeah, um yeah.' So he says, 'Go get your story.'  So I do. He then proceeds to basically critique every single aspect of my entire story.

He starts out by saying 'At first I thought that you were some sort of professional writing under a pseudonym because, you know, nobody has a name like Bacigalupi, I know the Abbot and Costello routine blah blah blah...' He goes off about how Paolo Bacigalupi is obviously a pseudonym or a joke name of some sort. Now he's getting a bit worked up. He says, 'You know, I thought you were a professional, and then I got to page 5 and right down there at the bottom you used the word jerked... and then 2 sentences later you used the word jerky--you took all of the power out of the fucking word!'

I'm sitting there on the line sort of terrified of this man just haranguing me. At the end of that whole conversation - a conversation in which he critiques, line by line, my entire story - he finishes up by saying, 'Well you got some potential, but don't write in genre, it's a waste of time. Don't get stuck in it like I got stuck in it.' And then he hangs up.

That was the last thing that I heard from this guy--I don't know what it was--sort of like a love tap I guess, but it actually sort of got to me. I proceeded to write a bunch of stories that weren't science fiction. I wrote historical fiction novels set in China, I went on and wrote a landscape... I don't know what you call it... sort of landscape porn I guess is the best word for it.  You know, one of those love of place and the rural west sort of stories. Then I wrote a mystery/western story and none of those genres is related to sci-fi in any way, shape or form, and none of them sold.

At the end of all of that, I'm sitting there with all of the rejection letters in my hands and thinking: Well, you know, actually I kind of liked writing science fiction and then I went back into it and started doing the short stories, and that's when I started writing things like "The Fluted Girl," and "The People of Sand and Slag" and started finding my niche. It's been a long process.

Consensus Avatars, and Primary / Secondary Negotiation Phases

A bit from a talk last year by Vinge -- 3pointD.com » Blog Archive » Vernor Vinge Paints the Future at AGC:

I am convinced that the day we really get high resolution heads up displays, most people who nowadays are carrying a bluetooth earphone and microphone would have no problem with wearing eyeglasses that gave them a heads up display of something like 4,000 by 4,000 if the infrastructure had moved along in concert. Then high resolution HUDs could be exploited. ...

I'm unclear on what the infrastructure issue is, here. Bandwidth? That's only a problem if you assume one is transmitting bitmaps at full res in real time. I don't think that's likely, at least at first. Look to the way that humans transfer this kind of information via the extremely efficient lossy compression scheme we call "language".
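To put rough numbers on the bandwidth question (my back-of-the-envelope, not Vinge's): streaming raw bitmaps at his hypothetical resolution is monstrous, while shipping a symbolic scene description and rendering locally on the wearable is almost trivial. All the figures below are assumptions for illustration.

```python
# Back-of-the-envelope (my numbers, not Vinge's): raw bitmap streaming
# vs. shipping a symbolic scene description and rendering on the wearable.

width = height = 4000            # Vinge's hypothetical HUD resolution
bytes_per_pixel = 3              # 24-bit color, uncompressed
fps = 60

raw = width * height * bytes_per_pixel * fps
print(f"raw bitmaps: {raw / 1e9:.1f} GB/s")       # ~2.9 GB/s

# Versus, say, 5,000 scene objects updated every frame, ~100 bytes each
# (position, orientation, a model URL) -- all assumed figures.
scene = 5000 * 100 * fps
print(f"scene graph: {scene / 1e6:.0f} MB/s")     # 30 MB/s
```

If those assumptions are anywhere near right, the two approaches sit roughly two orders of magnitude apart, before anyone even mentions video compression.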

.... That’s an example of a highly disruptive technology. It essentially destroys all other display technology except as emergency backups.

If you were able to get localization that was really good, you could imagine setting this up so that if your wearable knew where you were looking, what the orientation of your head was and where your eyeballs were tracking, then in addition to being able to produce the world’s best display, as good as the world’s best desktop display, you could actually overlay things in the environment.

There could be some interesting localization artifacts, here, as different localization schemes amplified one another's errors -- or just introduced strangenesses. The avatar floating six degrees to the left or right of the speaker, for example. A person's modified nose floating in the air above his/her head. Again, this is an argument for a vector-based system, offloading the localization to the local system: Let my eyes localize the stuff, don't make the broader system do it.
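As a toy illustration of the stacking problem (made-up numbers, my own sketch): give the head tracker and the object's self-reported pose a few degrees of angular error each, and see what that does to an overlay a couple of meters away.

```python
import math

# Toy error-stacking illustration with made-up numbers: the head tracker
# and the object's self-reported pose each contribute angular error.
head_err = 3.0     # degrees (hypothetical head-tracker error)
object_err = 5.0   # degrees (hypothetical error in the reported pose)

worst = head_err + object_err               # errors aligned: 8.0 degrees
typical = math.hypot(head_err, object_err)  # independent errors: ~5.8 degrees

distance = 2.0  # meters to the overlaid object
offset = distance * math.tan(math.radians(typical))
print(f"worst case {worst:.0f} deg; typical {typical:.1f} deg "
      f"-> overlay floats ~{offset:.2f} m off target at {distance:.0f} m")
```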

The term for that in academic circles is augmented reality. In that situation, having the processing power that’s involved with the network infrastructure I just described becomes very very useful, because you could in an ad hoc way overlay those portions of reality that you wanted to.

In an auditorium like this you could make the walls look like whatever you wanted, you could make the speaker look like a clown, and since everything was networked, you and your friends could get together and agree on what things looked like. The notion of consensual imaging becomes very very important, and again this is actually a very disruptive technology, if it were finally to happen. It blows away all discussion of large three-dimensional display technologies.

This is the really, really fun part -- in a project I'm working on right now, I call this "digging". A technology's adoption phase has phases of its own, and the ones that are most interesting to me right now are what I'd call the primary and secondary negotiation phases. The primary phase is the part where the bleeding edge early adopters figure out amongst themselves what constitutes an appropriate use of the tech. The secondary phase happens while the tech is going mainstream, and involves complex interactions between the bleeding and leading edges and the critical-mass bulk right behind the leading edge.

It's really that critical mass bulk that will drive everything. I don't take it as given that as goes the bleeding edge, so goes the leading -- or as goes the leading edge, so goes the critical mass. These are different populations, and I think even a trivial reading of how trends develop is liable to show that the leading edges don't determine how the body of the wave flows, or even predict it, so much as they give clues. To understand those clues, you need to understand the composition of the bleeding/leading edge communities, and how those communities relate to the critical mass adopters.

Here's how I'm thinking the adoption phases are likely to shake out with regard to consensus avatars. In the primary negotiation phase, digging (i.e., hacking someone's consensus avatar) would be regarded as a form of vandalism; in the secondary negotiation phase, it would be regarded as 'play' -- at least, that would be the socially acceptable way to regard it. Anyone who persisted in regarding it as vandalism would be regarded as old-fashioned, behind the curve. You'd have to buy in (at least superficially) to keep up.

Naturally this is just the tip of the iceberg. We would still have to figure out how augmented "reality" would play out for ordinary things -- things that are (superficially) not as charged as presentation of self.

The Zero Unemployment Wonder City of the Golden Anarcho-Capitalist Future

In a TED talk, Stewart Brand pointed out that all over the world, poor villages — the same villages that Jeffrey Sachs seems to want to preserve — are vanishing. The people who lived in them have moved to squatter cities, where, according to Brand, there is zero unemployment and a much better life. Because Jeffrey Sachs’ interest in poor African villages seems to be recent, I am not surprised that he may end up on the wrong side of the helped/didn’t help ledger.

Seth Roberts @ Scientific Blogging | The world's best scientists. The internet's smartest readers

There is "zero unemployment" in Brand's squatter cities because those who do not (or cannot) work, die. It's really got nothing to do with innovation or with economic growth or opportunity. It's got to do with it being a fundamentally unforgiving environment.

He may be right about the demise of experts (at least, if we forget for a moment that Brand et al are setting themselves up as really nothing more than alternative experts). But I don't really think squatter cities have anything much of value to tell us about it one way or another, especially if what we're relying on is the strange argument that they're wonderful places without human problems.

The germs of truth in Brand's arguments (yes, the countryside is depopulating; yes, people in squatter cities are continually innovating in response to the highly challenging survival environment) obscure a deeper truth: People will engage in endlessly inventive rationalizations to justify their activities.

Squatter cities are, more often than not, squalid places that are rife with disease, where the oppressions of tradition are replaced with oppression by the strong/clever/zealous/amoral. Better? Worse? And by what (and whose) criteria?

Are they also rife with innovation? Sure; it's necessary to survive in that kind of environment. Do people experience joy, happiness, wonder, and live full and rewarding lives there? Absolutely; people will tend to make a world where they can do that, wherever they live. This reactionary defense of "squatter cities", though, smacks of free-marketism at its silliest: That is good which provokes the most change. Amen.




Dialogics

Consider only the language. Or more precisely, compare David Chase's dialogue to Aaron Sorkin's dialogue. In Sorkin's shiny nonsense, people speak in repartee, and always find the words they need, and nothing insignificant, nothing tedious, is ever uttered. They talk as nattily as they look. Even their afflictions are oddly high-spirited, as coolness conquers all. There is not an unmordant or unmoralized second in anybody's day. Sorkin's phony people go from portentousness to hipness and back. They are the figments of a disastrously glamorous imagination, the polished puppets of a shallow man's notion of profundity. In The Sopranos, by contrast, there is no eloquence, even when there is beauty. Silences abound. These people speak the way people actually speak: they lie, and lie again; they hide; they repair gladly to banalities, and to borrowed words; they struggle for adequacy in communication; they say nothing at all. Their verbal resources are cruelly lacking for their spiritual needs. They cannot say what they mean, or they do not know what they mean. Their obscenities are their tribute to the power of their feelings: the diction of their desperation. When they reach for sophistication, they mangle it. Their metaphors are awkward and homely, as in Tony's climactic soliloquy in his therapist's office about getting off, and staying off, the bus. Yet all this inarticulateness is peculiarly lyrical, and deeply moving. It is also a relief from the talkativeness that passes for thought in fancier places. Words should be fought for.

Requiem for the Bada Bing

I love Sorkin dialogue, but the man has a point. It's a little like the old Hammett v. Chandler debate. When I want fireworks and lovely prose, I go to Chandler. When I want a real emotional connection to the material, instead of Chandler's nice, cool, upper-middle-class detachment, I go for Hammett.

"Shallow man" is a bit strong, though. It's not as though he's writing Nick and Nora Charles.


The Dangers of Passionate Design, Part Some of Many

Natalia Ilyin's thoughts on passion in design, from Metropolismag, via Sterling:

The describing of oneself as “passionate” is pretty much a given these days if you’re in any sort of business. We get junk mail about passionate state representatives running for office, brochures from accountants passionate about filing our taxes; we find passion in plumbers and tree surgeons, and where I live we commute on the ferry with literally hundreds of passionate software engineers, sitting quietly in their clean jeans and fleece vests and Helly Hansen parkas typing away on their laptops. It’s a cliché, okay, but it is a particularly ironic cliché in the design professions, for if there is one single thing that our design language was created to eradicate, it is passion.

Passion is not enthusiasm. It is not love. It is not enjoyment, and it is not flow. Passion is an unstoppable overflowing of emotion that destroys in its satisfaction, that torpedoes lives and marriages and nations, that shoots husbands or coworkers or strangers in rage. It is the hot lava of the soul, and it burns what it pours over. It is not the positive team-building thing your supervisor would have you believe. Passion causes wars and brutal killings and divorces, and has astronauts wearing Depends and the headmistresses of girls’ schools going to jail, and gets husbands run over in parking lots. To say that a bunch of software engineers or graphic designers are passionate about their work is to try to interject sex and confusion and addiction and desire into a kind of work that is essentially asexual, organized, left brain, and sober.

... It’s true: sometimes we like to give the impression of wild abandon à la Pierre Bernard—we design an edgy poster, use a disgusting photo to make a point, design a building that looks like a torso, string a cable in a weird way. But is that passion? Or is it calculation of the highest order—about exactly what will communicate our ideas to whom? Focus is one thing. Passion is another.

Taken as a whole, the last 100 years of design history can be seen as a violent abstraction from passion, from the bondage of longing, from needing. Certainly early Modernists professed ideals about community, sharing, and individualism. But they were afraid of passion. They had seen what it could do. And somewhere along the way, somewhere in the jockeying for position at the Bauhaus, design became a place where distance and aloofness became the ideal, where coolness and detachment became lauded, where human quirks and the admission of frailties became weakness.

It is our Nietzschean heritage.... What if today we got upset about what our client’s product actually does to the planet, what it will do to the landfill, or to the air, or to global warming. Oh, no. Let’s not think about that, it makes my skin itch. Just like our recent ancestors in Weimar, better safe than feeling. Better to be detached so that we can all go to the same party. We want to be close, but not so close as to feel too much. We want to be apart, but not so far apart that we feel alone on the planet.

[Schopenhauer famously imagined] a bunch of freezing porcupines: they have to huddle together for warmth, but if they get too close, they’ll hurt each other with their quills. If they stay too far apart, they’ll die of exposure. They have to find a place in between, where they are warm enough but aren’t being hurt by one another.

In our world all people balance distance and closeness. Designers do it for a living. What is more, we unconsciously model our social behavior on that of the designers who have gone before us. And at the end of that line are some porcupines who did what they could to survive in Weimar, who developed our rules of how a designer should act in the world, a social game that Helmuth Plessner once called—speaking of the larger social sphere—“an open system of unencumbered strangers.” For us it’s porcupines all the way down.

Enough With It!

Two aphorisms spring to mind: "Whatever is done out of Love lies beyond Good and Evil." (Friedrich Nietzsche)

And: "It is the air that connects us. / It is the air that separates us..." (Yoko Ono)

Residents of the Panopticon

What's it like to live inside the panopticon -- to actually live in a world where privacy is essentially a lost cause? I think we'll get it back, ultimately, but most likely in a form we wouldn't recognize.

Right now, though, it's as though forty years of thinking in urbanism had just never happened, as though Jane Jacobs had never so clearly and cogently illustrated that loiterers (of the right sort) can actually make the neighborhood safer by putting "eyes on the street" -- providing ready witnesses to any bad acts in the neighborhood. The best enforcement of civil behavior in any civil society is always the opinions of your family, friends and neighbors, after all. Even thugs usually care what the little old lady next door thinks of them.

We'll happily toss all that away given half a chance, if we can find something that speaks to our need for an 'emotional solution-design.'

Which is getting ahead of the point, a little; the following is courtesy Threat Level, who also point to a BBC article about talking surveillance cameras in Britain. These "one-way voice intercoms" [sic] provide an ongoing commentary on resident behavior:

.... As Robinson’s 7-year-old son, Justin, was hanging outside near the window and talking with his mother, an unidentified voice boomed over Faircliff’s new intercom system.

“‘Hey, you in the red shirt at 1432—step away from the window. This is private property. You’re under surveillance,’” a woman’s voice said, according to Robinson.

Justin, clad in red, obeyed the order and stepped back onto the sidewalk. Robinson had heard similar commands broadcast at Faircliff in previous weeks, but she didn’t think the voice had been addressing Justin. Then her 11-year-old niece and 8-year-old nephew stepped outside.

“Then it was, ‘You in the yellow shirt, you in the white shirt—step away from the window. This is private property,’” recalls Robinson. “It was unbelievable.”

....

In recent months, residents and guests alike who have violated the stringent apartment rules have been singled out over the intercoms and given orders such as “get off the steps,” “no chairs allowed in the playground area,” or, perhaps most common, “no loitering.”

Wanda Griffin, who has seen children ordered to not eat ice cream on their steps, says the hardiest residents respond to their unseen watchers with a flurry of f-bombs, which the intended targets can’t hear, and a pair of middle fingers pointed in arbitrary directions. The intercom directives have also kicked off a semantic debate at the complex: Is it possible to loiter in front of your own home, where you pay rent?

.... A favorite loudspeaker tale ... involves a recalcitrant, plump teenage girl who ignored several commands to stop loitering and get home. According to resident Deatra Brown and three other witnesses, a woman’s exasperated voice finally blurted over the speakers, “Get your fat ass off the corner!”

....

Seven-year-old Melvin Roberson (“boy with the red shirt”) says he and his friends rouse “the lady” when they play football and dodgeball and get too close to the apartment buildings. “They come on for nothing. They be describing your clothes and telling you not to be loitering,” says Roberson, who admits that he doesn’t know what the word “loitering” means. A few weeks ago, when the pint-size Roberson was trying to gain entry to a friend’s apartment, he was called out for hoisting himself onto a brick ledge to reach the call box; he says he’s too short to reach it otherwise.

Yvette Stephens (“You, in front of 1428”) was called out for sitting on her stoop as her laundry dried. Edna Avery (“Person standing in the doorway of 1430”) was called out for holding the door to her building open to allow two movers to bring a couch up to her unit. ....

What's the effect of this kind of life? No doubt the people who brain-farted the idea for this kind of a system in the first place would respond at this point that they are putting eyes on the street, they're addressing "lifestyle crime" (littering, loitering, miscellaneous minor malfeasance), and that the net effect is to get, through technology, what Jacobs asked for in the 1960s.

But an honest appraisal would have to recognize that response as disingenuous. The voice is detached, judgemental, and doesn't brook response -- doesn't even afford it, since there are no pickups (that the security company is admitting to) on the cameras. It can't possibly work to provide the kind of human-scale, person-to-person interaction that happens in the relatively messy but relatively safe neighborhoods of the real world.

It's clear that life under this regime pisses people off, at least. It breeds hostility, angst, anger, and pushes potential "offenders" to areas they believe to be outside of surveillance:


...

The surveillance has altered the way residents live and play at Faircliff, a 27-year-old housing project. On a recent Thursday afternoon, a group of about six young men have tucked themselves away in one of the complex’s few outdoor alcoves, drinking sodas and chewing sunflower seeds just beyond the bulbous black eye of the camera. They say they’re too old to hang out on the playground, and they would violate the rules of their lease if they were to sit on the apartment steps.

“We live up in this motherfucker, and we can’t even chill,” says an exasperated 17-year-old named John Joseph (previously called out as “guy in front of 1428” and “guy with the white shirt and blue jeans on”). “That’s what this motherfucker is—a jail.”

“Exactly. This place is Oak Hill,” says 18-year-old Rich Porter, referring to the District’s juvenile detention center.

....

No one at Faircliff knows for sure when they’re being watched or even where they’re being watched from. While some believe the monitors are on-site, the more likely scenario is that residents of different Edgewood properties are observed from the company’s Maryland offices. The company prefers to keep such things a mystery; Caruso would not disclose publicly when or where his employees are watching.

“This is an awful arrangement,” says Lillie Coney, associate director of the D.C.-based Electronic Privacy Information Center. “It will be almost impossible for there not to be charges of misuse of authority…. You create that kind of power dynamic when [the speaker] is unidentified. You can hide behind the curtain and act out your aggression, whatever’s hidden in the darker part of whoever’s been given this power.”

....

Griffin, the Faircliff tenant, recently heard residents getting the “Bad Boys” treatment as she escorted some guests to their car after a get-together at her apartment. “That just pissed me off,” says Griffin. “That just tells me what you think of this property. My guests were like, ‘My God. Y’all are living like that up here?’ It wasn’t called for.”

.... Teenagers at Faircliff have started hanging out on the sidewalks and in the street, beyond the purview of the cameras, because they say there are few permissible places left to hang out. The stoops, for instance, are off-limits. Residents who drag lawn chairs outside, including the elderly, are told they’re violating their lease. And the new-and-improved complex came with merely two outdoor benches to accommodate more than 100 units. And while elementary-school-age children are free to roam the playground, they can’t stray far from the wood chips.

.... In the past two months, Stephens has heard the loudspeaker voices threaten to take photos of disobedient residents and hand them 30-day eviction notices. “And it’s so loud that everybody in the complex knows who they’re talking to,” says Avery.

.... More troubling, says [Charlene] Collins, is the demoralizing spectacle she witnesses from her porch. “You see these prison movies, where they give people orders out in the yard—‘Get off the steps,’ ‘Pick up that piece of paper’—and it’s exactly like that,” she says. “There’s never a ‘please’; it’s always a demand. How are these children being affected by this?”

But there's money to be made/saved, and expensive tenants next door to appease: The surveillance systems with their quasi-omniscient remote monitors are cheaper than real security guards (not to mention less likely to go native by actually getting to know people in the neighborhood). And the surveillance society can be viewed as a way of forcibly controlling the behavior of the unwashed who lived there first, in the neighborhoods where the new $400K condos are being built.

(Let's put away the notion, while we're at it, that $400K condos can "save" a failing neighborhood. The people who live in $400K condos are not the people who can save a neighborhood. The people who can save a neighborhood can't afford $400K condos.)

"Speaker of the House: When these cameras don’t like what they see, they let you know about it." Washington City Paper District Line

 




 

The New End Of The World (TNEOW)

John Crowley on The New End Of The World (TNEOW):

.... The Former End of the World (the bomb) yielded its great books -- Riddley Walker the greatest, Canticle for Leibowitz etc., etc.  Now the New End of the World generates fictions that seem familiar but with no bomb to blame.  What has happened to the world?  All calculations of global warming suggest dislocation and suffering for the poorest peoples inhabiting seacoasts or places subject to desertification; temperate-climate inland rich countries (us, US) might not suffer so much -- nothing much worse than the Great Depression, some species loss, coastal cities abandoned over the course of a few decades.  McCarthy's and Crace's worlds are utterly vastated.  Nothing left.  No explanation.  Crowds of aimless walkers -- no technology, no social structure.  Why does this appeal?

John Crowley Little and Big - Almost Lost Book

Good question. I'm tempted to say that it appeals because the possibility is real, but as usual I suspect that the truth goes somewhat deeper than that. Total destruction seems to have some appeal in almost every age, even if the appeal is limited. Riddley Walker appeared in the Reagan Era, when lots of people thought the Bomb was a real possibility and almost everyone had given up hope that the Bomb was survivable.

But Canticle for Leibowitz and The Road aren't the only examples of extreme vision. There are lots of post-apocalyptic visions that have the world well-populated. Paolo Bacigalupi's "Yellow Card Man" (and its related stories, which I haven't read) are set in a post-collapse world that's a filthy and densely populated mashup of low-tech and high-tech. Within SF, that seems to be the current trend, and in America at least we owe that vision frankly to the Cyberpunks, who combined an amazingly ahistorical vision with a sense of the inertia of society: The world does not collapse, because there are powerful interests who would suffer if it did. And they're not about to let that happen. (A little chaos, on the other hand, can be wonderful for business.)

To be sure, they didn't invent the idea of whimper-not-bang, but they surely made it credible. And there are lots of non-cyberpunk examples, like Keith Roberts' "The Comfort Station"/"The Lordly Ones", which made a huge impression on me when I read them back in 1980. Their world was strangely like our own, as I think of it now: One where political chaos and some unspecified general crises seem to be gradually grinding away civil society bit by bit.

But I digress, as usual. Why is the vision of a devastated, technologically-reset world so attractive to McCarthy, Miller, and for that matter why was it so attractive to Crowley? The most common response is likely to be some variant on the idea that we're wired (or at least imprinted) to respond well to the idea of rebirth. Lloyd deMause, for example, might argue that it's all due to the trauma of our last trimester in the womb, which climaxes in delivery from the foetid and suffocating womb-environment into a painfully bright but well-oxygenated real world.

And obviously destroyed worlds are only one aspect of the matter. Consider singularitarianism, which since Vinge's coinage has been embraced as a religion by some. Such a bizarre concept, that we must always seek to annihilate ourselves (that we might be reborn as something better). Almost quintessentially non-cyberpunk, in a way.




Some Old Sawyer Riffs on Kurzweil

While looking for critical appraisals of Asimov's "three laws", I stumbled across an old (c. 1999) article on Robert Sawyer's website, a reprint of an Ottawa Citizen article from 1999.04.04. It's in the form of a News Hour-style dialog between Sawyer and A. K. Dewdney on the subject of Ray Kurzweil's The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and it gets off to a rip-roaring start as Dewdney righteously dismisses Kurzweil's extropian transhumanist vision:

A. K. Dewdney: In the virtual reality of Kurzweil's own imagination, his book has already had its closest encounter with reality. His vast compendium of bits and pieces of mostly imaginary technology, nurtured by a media that prefers to ignore the real work in artificial intelligence [AI], cobbled into a masturbatory engine of adolescent adventurism, is destined for a place in history beside the helicopter-in-every-garage and the paperless society. Kurzweil's book, which may also be read as a brilliant (if unconscious) satire on the spiritual vacuum of late Twentieth Century western society, also makes an attractive paperweight.

[On Ray Kurzweil's The Age Of Spiritual Machines]

The exchange ends up touching on a lot of my hot-buttons for AI and robotics. (Why don't American newspapers print this kind of stuff, by the way? I might be meeting Sawyer next weekend at a book signing, so I'll try to remember to ask him that.)

Dewdney on making a personal connection with a Turing Test-compliant machine:

…. The point is that there's a semantic difficulty here. "Intelligence" per se, is not the same thing as "consciousness." You can think unconsciously, for example. You can be unconsciously aware. But moods, feelings and perceptions are quite another thing. If such experiencings, called "qualia," are beyond computers by their very nature, it may well be the case that "intelligent" computers might pass the Turing test, but not for very long. Sooner or later, Kurzweil's computer (or human simulacrum) would seem, well, not quite all "there." As for uploading his mind, Kurzweil will probably not enjoy having eternal life as an unconscious entity. By the way, have you noticed there's a quasi-religious air about all this?

Indeed you can think unconsciously. I adhere pretty strenuously to the school of thought that conscious thought is not nearly as important as we perceive it to be, and it seems clear to me that there are lots of obvious examples of this in our everyday lives. Try to be very conscious of everything you do and think, and, if you're honest in the effort, you'll realize how much you do and think without really thinking.

It follows for me that "intelligent" machines can't be presumed to be conscious. Consciousness, in fact, isn't even a very interesting measure of machine intelligence -- at least, not consciousness as we normally understand it. What's much more interesting is the degree of competency at self-directed action.

Sawyer counters:

…Kurzweil is an evangelist for us transcending into another plane of existence — the virtual world inside the computer. Still, I don't believe there is anything divinely endowed about consciousness. If it exists as a real-world phenomenon, then it can be duplicated artificially. Yes, we won't be able to reproduce it until we fully understand the process, quantum mechanical or otherwise, that makes us conscious, but once we do, artificial consciousness will be possible, and Kurzweil's uploading-the-mind-and-soul concept will become feasible (although, granted, it may require a completely different sort of computer than the linear, digital ones we use today). Whether uploading one's existence is desirable is another question, though. An uploaded mind would experience a false, computer-generated reality that, although it might seem absolutely real, would in fact be bogus. To me, virtual reality is just air guitar writ large; it's not how I want to spend eternity.

More to the point: Is there anything about the nature of human experience that's fundamentally different from machine experience? In fact, there are lots of things. I'll name just a few that seem to me clearly to have very basic consequences for human experience of ourselves:

  1. Sensory perception as a complex pseudo-analog process.
  2. Generally, the experience of pain.
  3. Specifically, the experience of physical want (hunger, cold, fatigue) as pain.

There are lots of other ways to spin this, but the thought experiment treatment that I always end up pulling first out of the hat is Catherine Moore's "No Woman Born". Moore's 'Deirdre' has been forcibly inserted into an extropian scenario, her fragile human body replaced by a powerful metal exoskeleton. Her experience of her reality is sufficiently different from that of other humans that she must maintain a careful, conscious apprehension of her own affect in order to seem human. In fact, she'll likely cease to be really human at some point, despite her best efforts.

In fact, though, it's not clear that the "reality" an "uploaded" mind would experience is in any serious way more "unreal" than the reality we experience, ourselves. It's not as though we get our reality un-edited, after all. Our visual data is massaged and restructured several times before it ever gets the attention of our conscious mind, and significant things (like the blind spot we all share) are edited out. And that's just the "raw" data -- that's not even beginning to account for the effect of our assumptions, conscious or otherwise. The point is that machine perception could in fact be superior to meat-perception: It could be closer to "real" than what we experience with our own eyes.

That is, if by "real", we mean "corresponding to the physical world."

In practice, 'real' means something subtly different: Correspondence to our evolved model of the physical world.


