[0934] Call Me A Skeptic
└ posted on Thursday, 2 November 2017, by Novil
- Article: “IBM: We increased the reaction capacity of our AI system by 100% this quarter!”
- Caption: What the public hears.
- Caption: What a software developer hears.
- Article: “IBM: We increased the efficiency of our AI system by 200% this quarter!”
- Manual: Compiler Flags | -o3: Activates full code optimization for release versions.
- Software developer: Ohh, interesting!
- Article: “Elon Musk: Tesla has made another breakthrough in autonomous driving!”
- Software developer: Our cars can now drive 100 meters in heavy rain before crashing into the next tree.
- Article: “Elon Musk: Our cars will be driving fully autonomously next year!”
What happened to BENJAMIN and LUNA?
Listen, and understand. That car is out there. It can't be bargained with, it can't be reasoned with, it doesn't feel pity or remorse or fear, and it absolutely will not stop, EVER… for a hundred meters, that is.
They’re fully autonomous now as long as you don’t care about the condition of the passengers or cargo. Or whether or not they actually get to their destination.
They still have a better track record than any human-driven vehicle, being the cause of, well, one, two accidents? All within a testing period of over five years. Are there still bugs? Yep. But at this point I'd trust them over the average asshole that cuts me off because they're, well, that woman in the comic, any day of the week.
Well, I would fire the developer from the first example… using an else-if ladder instead of a switch statement!!! The code is unreadable…
Second one: I suppose it's finally out of beta and there's no need to debug as much anymore, so they can finally enable optimization.
Third one… well, autopilots nowadays are not so bad, but yeah, there are still a lot of conditions in which they don't work correctly.
Fourth one… Wishful thinking, yep. We are software developers, not wizards; complicated systems like that would require more than a year, especially if you consider the current progress. Proper image processing alone would take a lot of time, not to mention all the other details…
It’s coming and you can’t stop it. Sooner or later driving on the road will be banned. And we’ll be better for it.
Elon Musk: I’m gonna go make a rocket.
What the public sees: rocket going into space.
What NASA sees: a rocket blowing up into a massive fireball.
@ nicktyrong:
I'm not really disagreeing with you, but last year 'fully' automated cars drove less than 1 million miles, a majority of which had human assistance. Comparatively, Americans drove a collective 2.5 trillion miles. Automated cars had only one death last year, partly due to user error, but if you scale that upward by mileage, even that one death is still too much.
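To make that scaling concrete, here is a back-of-the-envelope version using the commenter's own (rounded) figures; the ~37,000 figure is the commonly cited annual US road-fatality count, added only for comparison:

```python
# Back-of-the-envelope scaling using the figures from the comment above:
# ~1 fatality in ~1 million autonomous miles, vs. 2.5 trillion human-driven
# miles per year. All inputs are rough, illustrative numbers.
auto_deaths = 1
auto_miles = 1e6          # "less than 1 million miles"
human_miles = 2.5e12      # "a collective 2.5 trillion miles"

scaled_deaths = (auto_deaths / auto_miles) * human_miles
print(f"{scaled_deaths:,.0f}")  # ~2,500,000

# For comparison, human drivers cause roughly 37,000 US road deaths a year,
# so at this (tiny-sample) rate the autonomous fleet would look far worse.
```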
You missed a lot of joke potential about Elon Musk and the Hyperloop 🙂
@ Trimutius:
Um, code much?
All-Purpose Guru wrote:
Um, code much?
Well, my job description is 'Senior Software Engineer' (they started renaming software developers to software engineers some time ago because it sounds fancier). So yeah, I do that a lot, and coding is also one of my hobbies…
Weird, I remember how this joke would’ve looked in 2010:
>Elon Musk: “We’re going to build reusable launch vehicles.”
>Public hears: [1940s sci-fi rockets]
>Rocket engineers hear: [Explosions]
Funny how that turned out.
Fact of the matter is, mile for mile, Google and Tesla’s cars outperform human drivers by two orders of magnitude. A car accident involving an autonomous car is so rare that they’ve been international news the few times they’ve occurred (and even then, it was human error). A human-driver accident doesn’t even make the local paper unless it causes a traffic jam.
You’re a skeptic.
@ xthorgoldx:
Well, you've kinda hit the crux of the problem right on the head: it's not really the programming itself that's the problem, it's the human element. The one that sometimes wants control of the vehicle and other times would just like to sit back and watch a movie. From a computer's perspective, human interactions are very irrational, and an AI would be required to compensate for that, not just over one trip but over the entire lifespan of that vehicle, or at least the entire lifespan of its hardware components.
I asked my programmer cousin if fully autonomous cars were coming soon and she laughed too!
Programmers alone won't be able to bring fully autonomous cars everywhere. You would need a bunch of lawmaking and politicking to tone down and remove the human element as much as possible. A highway trafficked by 100% autonomous vehicles would likely be one of the safest, fastest highways in history, but bring in one human and who knows what would happen.
Sure, they can handle regular traffic well, but even a rainstorm shuts the system down.
And when the choice is between swerving and hitting the vehicle that ran the stop light, what will it do?
What about random potholes or a toy in the road; will the car just stop, or would it straddle the hole like a real human would?
Anyone here actually ridden in one? Parking Assistance Mode doesn’t count BTW.
Pelendones wrote:
The Simpsons already did it superbly decades ago (“Marge vs. Monorail”).
I think the fine print for #4 is “on certain types of roads”. For example, some cars are already “fully” autonomous on interstate highways.
YouTube has some rather excellent videos on this topic; I'd suggest watching channels like CGP Grey, Veritasium, SciShow, and TED Talks. TED Talks especially: they've got good videos detailing the logistical difficulties of programming driverless cars.
I'm also curious about Novil's take on the neural network that was able to master Go in 3 days. https://arstechnica.com/science/2017/10/new-neural-network-teaches-itself-go-spanks-the-pros/
@ MidoriLuna:
That's so funny, I was actually going to mention AlphaGo Zero, because another site described it as being "similar to an alien species developing its own mathematics". That's how the public sees unsupervised learning, even though the biggest breakthrough the original AlphaGo introduced was the ability to learn from humans, when almost all previous AIs were self-taught (or rigidly designed).
In fact, AlphaGo Zero outperforming AlphaGo puts AlphaGo's entire raison d'être in question. It's on the level of when support vector machines beat multilayered neural networks, which did less to show the power of SVMs and more to show how fundamentally flawed the MNNs of those days were.
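For anyone curious what "teaching itself" means mechanically: AlphaGo Zero's actual pipeline (deep networks plus Monte Carlo tree search) is far beyond a comment box, but the self-play idea itself fits in a few lines. Here is a toy, hedged analogue: tabular Monte Carlo self-play on single-pile Nim (take 1 to 3 stones, whoever takes the last stone wins). Every name and number in it is my own illustration, not DeepMind's code:

```python
import random
from collections import defaultdict

# Q[(pile, take)] -> estimated value of taking 'take' stones from 'pile',
# from the perspective of the player about to move.
Q = defaultdict(float)
ACTIONS = (1, 2, 3)

def pick(pile, eps=0.1):
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < eps:                 # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

def train(episodes=30000, alpha=0.3):
    for _ in range(episodes):
        pile, history = 15, []
        while pile > 0:                       # one policy plays both sides
            a = pick(pile)
            history.append((pile, a))
            pile -= a
        reward = 1.0                          # the last mover won
        for state, action in reversed(history):
            Q[(state, action)] += alpha * (reward - Q[(state, action)])
            reward = -reward                  # flip perspective each ply

train()
# Optimal play is to leave the opponent a multiple of 4 stones; check:
policy = {p: max((a for a in ACTIONS if a <= p), key=lambda a: Q[(p, a)])
          for p in range(1, 16)}
print(policy)  # expect policy[p] == p % 4 whenever p % 4 != 0
```

The point of the toy: nobody told it the "multiple of 4" rule; it falls out of playing itself, which is the same basic loop, vastly scaled up, behind the self-play headlines.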
Should be -O3, not -o3.
Also: I can’t agree with you about Elon Musk. People made all the same jokes when he said he would land a rocket on a drone ship. And autonomous driving has been done quite well by others (albeit with expensive LIDAR and such), and his cars already do pretty well, even if not perfectly. There are some issues but not to such an extent that it’s crazy.
The only crazy thing about Musk is his timeframes. You have to multiply every single time span that comes out of his mouth by five.
What Calvin hears: *insert image of some noodles*
Frogspoison wrote:
Umm… the eX-Driver anime?
The public does not see what's on the left of panel 4. Maybe they see what's on the left of panel 3 again, maybe they see George Jetson's flying car just because. But they aren't seeing mathematical equations any more than they're seeing code or math in the other panels.
As a software developer, I have to agree with ALMOST all of your pairings, though the first autonomous vehicles are likely to be trucks (as in, "big rigs").
"default" is a keyword in several relevant languages. I would expect something like "defaultresponse()".
@ xthorgoldx:
I do remember one accident where the AI was at fault. It incorrectly predicted a bus driver’s actions.
@ xthorgoldx:
Two problems here:
One, Tesla DOES NOT SELL AUTONOMOUS CARS. They have a driver assist that only works in specific situations. E.g., they ask you to use the "autopilot" only on highways and similar roads, and they use basically the same technology as all the other car companies do in their high-end models. The only difference is that they are less conservative in giving their customers access to these features.
Two, every Google car still has a human driver who monitors everything. If they say "We only had two accidents", they actually mean "two times the car made a mistake and the test driver didn't react in time". What this doesn't tell you is how many times the person reacted in time and prevented an accident, and how many times the AI got confused and asked the human to take back control.
I can only see the reaction capacity increasing by 100%, not 200%… damn those salesmen and their exaggerated figures! 😛
@ Frogspoison:
You know, whenever I hear "fully automated highways", I see one car malfunctioning and crashing into another, then another and another, and once the entire system realises this, there would be a pileup of cars on the road, effectively creating a roadblock. Now maybe that wouldn't happen, as the AI would be able to compensate for it, but I always picture that. Also, with 100% automated highways there are more risks from bugs and, God forbid, subatomic particles from space crashing the systems. And there's the risk from hackers and terrorists.
As someone working with a company that develops self-driving cars: a year might be optimistic, but two seems absolutely realistic at this point. You're full of it, you know?
@ Novil: you once said you're not a big fan of xkcd, but maybe you'll like this one:
https://xkcd.com/1897/
Not sure I agree, but then again I am planning a vacation to learn TensorFlow after getting a GTX 1080 Ti for that purpose (I already had an awesome framerate and ultra-high graphics settings before, as I had an R9 390X prior to my GTX 1080 Ti…).
I got it mostly because I want a second gaming PC; it came with a game I could hand over to my nephew (not working out super well due to fucking bureaucracy and bullshit),
and for TensorFlow… if anyone is gonna start the robot apocalypse, then it's gonna be me!!!
(Though first I am gonna teach my AI to surf porn… that should slow down the whole robot apocalypse a bit. (I figure a giant porn hoard is a good, useful dataset I can experiment on.))
Also note that my main objection is that your examples use common flow statements like switch cases…
and that is more in the flavor of game AI, which typically uses cheaper tricks :p
(oh, and they often just raise the enemy HP and then claim they improved the AI, because the customers will think the enemies are smarter)
Though some game AIs have been rather complex… like in Black & White, where they had an unexpected bug:
the creature was hungry…
it knew that it killed the things it wanted to eat in order to eat them…
it knew other creatures were trying to kill it…
it reasoned that it must be food and ate itself.
Null exception error thrown…
OK, so maybe it did not exactly reason; it most likely only re-categorized itself as food and chose to interact with itself as a food item…
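That anecdote has the shape of a classic filtering bug: nothing ever excludes the agent itself from its own list of food candidates. A hedged toy reconstruction (this is NOT the actual Black & White code, just the pattern, with made-up names):

```python
# Toy reconstruction of the "ate itself" bug described above. It only
# illustrates the forgot-to-exclude-self pattern; all names are made up.
class Creature:
    def __init__(self, name):
        self.name = name
        self.edible = True        # creatures are meat, so they qualify

    def find_food(self, world):
        # BUG: 'self' is part of 'world' and passes the edibility filter.
        # A fix would be: [c for c in world if c.edible and c is not self]
        candidates = [c for c in world if c.edible]
        return candidates[0] if candidates else None

beast = Creature("beast")
world = [beast]                   # hungry, and nothing else around
meal = beast.find_food(world)
print(f"{beast.name} eats {meal.name}")   # "beast eats beast"
```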
From what I see, the problem is more that we want and need to travel that much, that fast.
Were we to slow down and live a peaceful life in the country, growing our own food, and working, say, only half our days somewhere nearby, getting there by bike or train… I figure there would be fewer car accidents. As well as less anxiety and absolutely unnecessary trade and "goods".
I blame the artificially induced customer demands, money and general human greed.
Note: I didn't mean that trade at large is unnecessary; I meant the (quite likely large) part of trade that is not needed.
Veyraa wrote:
What is actually seen: Fireball … Fireball … Fireball … Success … Fireball … Success … Success
The recent launches have gone quite well. I think SpaceX is on the right path. After all, it’s not too difficult, just rocket science.
@ Mxax-Ai:
Everyone trying to make a rocket will have lots of them blowing up…
That is part of the learning curve… the question is whether Elon Musk can afford the learning curve.
@ Mxax-Ai:
Mxax-Ai wrote:
Yes, rocket science is not brain surgery. On the other hand, rocket engineering is far more complex than rocket science.
@ MidoriLuna:
And how many miles were driven in the previous years? They've been testing these things for at least three years, if not more. They've proven themselves. User error? So… that asshole that cut me off? Again, I still trust the computer more than the average douche on the road.
Yeah, pretty much.
I am a software developer and I can confirm this.
@ nicktyrong:
That's the thing: they haven't really proven themselves yet. Yes, the technology is viable, and eventually it will work, but self-driving cars still experience a great number of "disengagements", where a human is required to take over from the computer. In testing this is normal, but in regular commercial use any one of those disengagements could have disastrous consequences.
Trimutius wrote:
Considering that they call a function named "default", they're probably working in a scripting language that doesn't have switch or case statements.
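For what it's worth, the usual idiom in such languages is a dictionary dispatch with an explicit fallback, which plays the role of a switch's default case. A minimal sketch in Python (all handler names here are made up, not taken from the comic):

```python
# Dict dispatch: the common replacement for switch/case in languages
# that lack it. Handler names below are hypothetical.
def handle_greeting(msg):
    return "Hello!"

def handle_question(msg):
    return "Good question."

def default_response(msg):
    return "I don't understand."

HANDLERS = {
    "greeting": handle_greeting,
    "question": handle_question,
}

def respond(kind, msg):
    # dict.get with a fallback plays the role of a switch's default case
    return HANDLERS.get(kind, default_response)(msg)

print(respond("greeting", "hi"))   # Hello!
print(respond("weather", "rain"))  # I don't understand.
```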
As a software developer of 13 years who has done work in AI, I have to point out that AI is not a series of conditionals (if statements). It's an algorithm that finds a local minimum in an n-dimensional manifold, where n is the number of input data points. It is just an algorithm at this point. All fears of AI taking over the world are unfounded and a little silly.
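The "finding a local minimum" this commenter describes is, in its simplest form, just gradient descent on a loss surface. A minimal sketch with a toy quadratic loss (the dimension count, target, and learning rate are made-up illustration values):

```python
import numpy as np

# Minimal gradient descent: walk downhill on a loss surface until
# we settle into a (local) minimum.
def grad_descent(grad_fn, w0, lr=0.1, steps=200):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)      # step against the gradient
    return w

n = 5                                   # the "n-dimensional" space
target = np.arange(n, dtype=float)      # minimum of our toy loss
grad_fn = lambda w: 2.0 * (w - target)  # gradient of sum((w - target)**2)

w_min = grad_descent(grad_fn, np.zeros(n))
print(np.round(w_min, 3))               # ≈ [0. 1. 2. 3. 4.]
```

No conditionals anywhere in the learning itself, which is the commenter's point: it is optimization, not a stack of if statements.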
Ugh, I always hated the concept of self-driving cars. They're always designed without any kind of steering wheel for manual control. Anyone with half a brain ought to realize that full automation is actually less efficient than partial or optional automation; all you have to do is drive out to a house in a rural area, or try to drive during a block party or whatever, and suddenly, boom, self-driving cars are useless without some way to take control of them yourself.
Sure, they're theoretically useful for cargo transport on city streets or even highways, but it always bugs me; it's like an HCF (halt and catch fire) situation waiting to happen.
@ Trimutius:
A switch statement more readable than if statements? That's a new one. Well, AI uses neither, and it is not readable unless you understand higher-dimensional calculus.
@ countzero:
If you want to see how many times the safety drivers took over, this page has all the disengagement reports submitted to California for 2016: https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/disengagement_report_2016
@ nicktyrong:
I'm inclined to agree. I've worked on safety-critical systems (think preventing several tonnes of machinery from going through somebody's head in the worst-case scenario), and now that the technology is there, I think self-driving cars will happen faster than people expect. But not from Tesla. Toyota, VW, Daimler, GM, Ford: the companies with huge resources and deep pockets, not a bit of venture capital and an annual loss.
Kipuna wrote:
Fixed. I initially wanted to add more else-if branches, but the text got way too small. And then I forgot to adjust the headline.