Why not? We have killer drones.
https://www.bbc.com/news/technology-63816454
We did that with a barricaded shooter a few years back. He was holed up and in dialog with police on his cell phone. The battery was running low, so they sent him a new one on the tracked robot. But instead of a battery, it was C4. Boom.
Laumer included a history of the Bolo as an appendix to one of his books. The Mark I is described as a conventional large (150-tonne) tank equipped with various servos and mechanical devices to reduce crew requirements. It is developed around the year 2000 by the fictional Bolo Division of General Motors.[9]
By the time of the development of the 300-tonne Mark III, its AI allows limited independent action, and is powered by "ionic" batteries able to support combat-level activity for up to ten years and enabling operation even when fully submerged.
The AI capability increases until the incorporation of psychotronic circuitry in the Mark XX leads to Bolos becoming self-aware and capable of fully independent operation.[6] The Mark XXVI is described as capable of true independent strategic planning, while the final standardised Bolo, the 32,000-tonne Mark XXXIII, is described as fully self-willed and able to operate indefinitely without external support.
Simpler is better, except when complicated looks really cool.
Because two can play at that game, or four, or six, or as many as can afford it. And many can.
And it won't be aircraft as we know them now...
See #3.
So... Dad was the director of the Army's R&D Engineering Center at Ft. Belvoir, VA in the late '70s and early '80s. They had in development a robot quad-wheeler with a quad TOW missile launcher on top.
Top brass at the Pentagon were not amused... the program was killed!
Dead and done!
Skip
--- This post is delivered with righteous passion and with a solemn southern directness ---
...fighting against the deliberate polarization of politics...
Remember Google is Skynet!
No matter how many really smart people say this is a really bad idea, we are going down this path.
If the AI machines are really "intelligent", and learn about the ways human beings treat each other and the natural world,
they will wipe us out at the first opportunity.
Maybe that is what keeps Henry the K awake at night - he fears that AI machines will ascertain what is in the darkness of his soul.
Here in little old Bradford there is a little company that started out making a robot tackling dummy for the Dartmouth football team. One of its descendants is a semi autonomous robot for training SWAT teams. In a house, it can hide, and shoot back with a paintball gun.
A robot or flying drone armed with cartridges of bear spray or skunk juice could be effective without bullets or bombs.
Presumably, either would be equipped with video and controlled by an operator.
I would expect the AI killer-bots to be fliers . . . easier technology.
The US has used drones to kill people over much of the world (see whistle blower Dan Hale), though not very AI-ish . . . . yet.
Watch the US react with righteous indignation when killer bots start coming home to roost.
I am surprised it has not happened already.
https://reason.com/2021/07/28/daniel...ths-in-prison/
Girls, girls, please.
"If the AI machines are really 'intelligent', and learn about the ways human beings treat each other and the natural world, they will wipe us out at the first opportunity."

We only assume this standard negative outcome because we've been trained by fiction writers and Hollywood, who need a bad guy as a default for the necessary tension automatically built into any storytelling, because that is what is taught by millennia of the reader-publisher-writer conspiratorial cabal.
But, he said. Wait, wait!
What if the AI really is actually intelligent in the way we'd like it to be? The kind of AI that solves such biochemical problems as cancer and Alzheimer's, and manages our necessary resources like fresh food and water for all populations? AI that truly understands humans and its own nascent relationship to its creator?
Let us take a page from Mr Wilson's very reasonable book and recognize that we modern humans have, for the last twenty thousand years, been evolving our culture to be ever more inclusive, using our intelligence to increase our understanding of our world and our place in it. We are a social species, dependent not only on our individual selves but on ourselves as members of a group, and we have been making those groups ever larger and more inclusive.
From small tribes limited by natural topography, intent on doing whatever carnage was necessary to ensure a food supply and shelter in the local realm, including murdering competing tribes in our valley, we have grown into nation states that behave in a similar manner toward other nation states on ever larger chunks of geography, and even into groups of nation states acting against 'outlaw' nation states. All the while we have used ever better methods, cultivated in part by knowledge shared by more and more individuals, to protect and nurture the individual members and smaller sub-groups within the nation states. The progress of humanity has been to become ever more caring for ever larger and more inclusive groups, which does an ever better and more efficient job of ensuring the safety, longevity and quality of life for everyone. Everyone except murderers, rapists, thieves, vandals, and other anti-social types.
With that perspective in mind - Shirley, a much better predictor of humanity's behavior than the entertainment mode of popular fiction - our AI, based on real human values, will certainly, absolutely, prefer the former. It will self-engineer not to punish us for being savagely inhumane to each other, but rather to reward us for the opposite, and for bringing about the inevitable development of the AI itself. That AI will assuredly continue to evolve to satisfy this basic attribute of humanity: improving survival for both the individual and the society, AND inventing human-protecting and human-enhancing AI - looking at you, Mister Data - to include such things as the necessary bypass of relativity's preclusion of interstellar travel a la Star Trek, and the other functions of developing a human off-planet, extra-solar-system diaspora.
Of course it will. Why would we invent an AI that would evolve to become self-destructive? Attacking humanity would surely obviate the AI itself - self-destructive, psychotic in the extreme. AI will most certainly ensure ours and its own survival in at least the same ratio as humans being pro-humanity and not anti-social.
Your honors, the defense rests. The bailiff will escort the deviant fiction purveyors out and engage auto-therapy. We'll recess for pastry and cocaine in the lobby before hearing the next case. One day, perhaps in my lifetime, AI will invent a thoroughly entertaining fiction that doesn't rely on carnage and savagery. Away and begone, ye crime drama, you serial rapist-murderers!
. . . not once they become self repairing, and self raw materials processing, and self chip making etc.
And BTW, AI is already writing fiction.
StoryLab is an AI-powered writing assistant that helps you come up with story ideas, outlines, and character profiles. It is designed to help fiction authors write stories by providing them with story ideas and outlines. The app uses a neural network to generate stories based on input from the user.
Since the police have used one to kill someone, no future hostage taker, active shooter, or whoever will ever trust the police not to try it again. They threw away a negotiating technique.
Might have been a good option in Uvalde. No key required for the door, no bravery required by the operator.
And operating one step removed from the action, trigger-happy US police might be a little more... restrained in their response. No need to shoot first any more.
As long as these things have a human operator, I have zero problems with the idea. In fact, it strikes me as the height of stupidity to expect people to front an armed offender when the technology to do an armed robocop is available.
Pete
The Ignore feature, lowering blood pressure since 1862. Ahhhhhhh.
When I first saw the thread title, I thought it sounded like a completely autonomous robot with AI as the decision maker, given an encounter that presented probable cause for lethal force. If you put a trained operator in as a remote driver and decision maker, then the robot becomes just a trigger extension with a most excellent sighting system. That's a whole nuther thing and pretty much already accepted technology, though not so far in domestic police work. It's not that different from having a drone pick off a terrorist leader in an automobile caravan in a distant territory.
But with an unassisted AI making the determination as to the necessity of lethal force and its consequent application, I pictured a Hollywood-style robot (should it be a humanoid or a Wall-E-looking robot?) stopping a citizen and asking,
"Ma'am, where are you going with that coconut? Have you paid for that? Hands up. Turn around and spread 'em. Do it now, punk, lady. Hold still, and I'll try to find a banana for your monkey."
"Okay, people, stand back. This could get real. She's really asking for it. I'm going to make her head pop. See that red dot? Kids, get behind that curb. Do it now, punks!"
***
"As long as these things have a human operator, I have zero problems with the idea."

I can envision a laboratory study with a sample of typical operator candidates for training, taking a survey - answering a long questionnaire to gauge their responses to situations where the use of deadly force is possibly indicated - and comparing their responses to some control group of regular cops and/or cop candidates. Is it more or less likely that a robot operator would use lethal force than a human in situ in a given instance? I would not be surprised if the meat-based remote robot operators were more trigger happy, among candidates with a typical police recruit psychological profile. In a control group, one-year veteran officers might be more likely to use lethal force than fresh recruits.
What if a savvy and bold criminal element managed to get possession of AI-capable bank-robbing robots? Picture that infamous LA bank heist running gun battle in the streets, but with a squad of AI robot cops exchanging gunfire with a contingent of bank-robbing AI robots holding a hostage.
On Tuesday San Francisco's ruling Board of Supervisors voted to let the city's police use robots that can kill.
https://www.bbc.com/news/technology-63816454