Old 07-09-2021, 04:02 PM   #11 (permalink)
Master EcoModder
 
freebeard's Avatar
 
Join Date: Aug 2012
Location: northwest of normal
Posts: 27,671
Thanks: 7,768
Thanked 8,580 Times in 7,065 Posts
Quote:
A 'Watson', that's digested everything ever spoken or written about philosophical implications of technology, will have an upper hand. No human can possibly have a command of all that data.
RAND Corporation, NRO, NSA, CIA, FBI, Dr. Evil, would be candidates for early adopters.
Incorrect. It's Microsoft.

They assumed control of GPT-3.

Quote:
Originally Posted by DDG
GPT-3 - Wikipedia
https://en.wikipedia.org/wiki/GPT-3
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters.
[snip]
Microsoft announced on September 22, 2020 that it had licensed "exclusive" use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3’s underlying code.[6]
That concern aside, GPT-3 is like the old Oracle of Delphi. Ask stupid questions, get stupid answers. But, frame the question properly....
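
To put that "frame the question properly" point in concrete terms, here is a rough sketch against the public GPT-3 API mentioned in the quote above. The engine name, prompts, and parameters are illustrative assumptions, not anything from this thread:

Code:
# A minimal sketch of prompt framing with the public GPT-3 API.
# The engine name, prompts, and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A bare, Oracle-of-Delphi-style question: tends to get a vague answer.
bare = "Is AI dangerous?"

# The same question, framed with a role, a scope, and a requested format.
framed = (
    "You are a careful automotive engineer.\n"
    "Q: In two sentences, what risks does a neural-network driving "
    "controller add compared to a rule-based one?\nA:"
)

for prompt in (bare, framed):
    response = openai.Completion.create(
        engine="davinci",   # assumed GPT-3 base engine name, circa 2021
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    print(prompt)
    print("->", response["choices"][0]["text"].strip(), "\n")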

__________________
.
.
Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster

____________________
.
.
"We're deeply sorry." -- Pfizer

Last edited by freebeard; 07-09-2021 at 05:17 PM. Reason: GPT-3 for GTP-3
The Following User Says Thank You to freebeard For This Useful Post:
aerohead (07-09-2021)
Old 07-10-2021, 10:22 AM   #12 (permalink)
Somewhat crazed
 
Piotrsko's Avatar
 
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,061
Thanks: 467
Thanked 1,112 Times in 981 Posts
Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
The Following User Says Thank You to Piotrsko For This Useful Post:
aerohead (07-21-2021)
Old 07-10-2021, 11:39 AM   #13 (permalink)
Master EcoModder
 
redneck's Avatar
 
Join Date: Jun 2009
Location: SC Lowcountry
Posts: 1,795

Geo XL1 - '94 Geo Metro
Team Metro
Boat tails and more mods
90 day: 72.22 mpg (US)

Big, Bad & Flat - '01 Dodge Ram 3500 SLT
Team Cummins
90 day: 21.13 mpg (US)
Thanks: 226
Thanked 1,353 Times in 711 Posts
Quote:
Originally Posted by aerohead View Post
One glaring attribute of the machine logic would be the speed at which a threat could be assessed and instructions signaled for mitigating action.
It's said that the computer is 10X faster than us.
Hierarchy of priorities will be interesting.
Do you dodge the pink-polka-dotted elephant that's just fallen from the palm tree, or the pool of glycol that has just exploded from a ruptured radiator hose?
If a collision with a car full of kids is imminent, and you have a moment to react, do you steer for the nose of that car, the passenger compartment, or the trunk?
AI will have to make the right call. Something we might take for granted as humans. And something programmers must be realizing more and more as they attempt to get a CPU to 'think.'
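
Purely as a toy illustration of that "hierarchy of priorities," the collision-mitigation choice above could be sketched as a weighted cost ranking. Every option and number below is invented for illustration; no real controller is this simple:

Code:
# Toy illustration of a collision-mitigation priority ranking.
# All options and cost values are invented for illustration only.

options = {
    "steer_for_trunk": {"occupant_risk": 2, "other_risk": 3},
    "steer_for_passenger_compartment": {"occupant_risk": 2, "other_risk": 9},
    "steer_for_nose": {"occupant_risk": 3, "other_risk": 4},
    "full_brake_straight": {"occupant_risk": 5, "other_risk": 5},
}

def total_cost(risks, other_weight=2.0):
    # Weight harm to others more heavily than harm to the vehicle's occupants.
    return risks["occupant_risk"] + other_weight * risks["other_risk"]

best = min(options, key=lambda name: total_cost(options[name]))
print("chosen maneuver:", best)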

Quote:
Piotrsko

Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.



Programmers...
__________________


Woke means you're a loser....everything woke turns to ****.

Donald J Trump 8/21/21




Disclaimer...

I’m not a climatologist, aerodynamicist, virologist, physicist, astrodynamicist or marine biologist..

But...

I play one on the internet.

The Following User Says Thank You to redneck For This Useful Post:
aerohead (07-21-2021)
Old 07-10-2021, 01:33 PM   #14 (permalink)
Master EcoModder
 
freebeard's Avatar
 
Join Date: Aug 2012
Location: northwest of normal
Posts: 27,671
Thanks: 7,768
Thanked 8,580 Times in 7,065 Posts
Quote:
Originally Posted by Piotrsko
Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.
Babies inherit their parents' biases. Programmers are constrained to selecting the training data sets to be used.

Quote:
GPT-3 is a neural-network-powered language model. A language model is a model that predicts the likelihood of a sentence existing in the world. For example, a language model can label the sentence “I take my dog for a walk” as more probable to exist (i.e. on the Internet) than the sentence “I take my banana for a walk.” This is true for sentences as well as phrases and, more generally, any sequence of characters.

Like most language models, GPT-3 is elegantly trained on an unlabeled text dataset (in this case, the training data includes among others Common Crawl and Wikipedia). Words or phrases are randomly removed from the text, and the model must learn to fill them in using only the surrounding words as context. It’s a simple training task that results in a powerful and generalizable model.
....
But here’s the really magical part. As a result of its humongous size, GPT-3 can do what no other model can do (well): perform specific tasks without any special tuning. You can ask GPT-3 to be a translator, a programmer, a poet, or a famous author, and it can do it with its user (you) providing fewer than 10 training examples.
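
To make the dog-versus-banana example concrete, here is a tiny bigram language model. It is only a sketch in the spirit of the quote; the toy corpus is made up, and this is nothing like GPT-3's actual transformer architecture:

Code:
# Tiny bigram language model illustrating "probability of a sentence existing."
# The corpus and smoothing are toy assumptions; real models train on web-scale text.
from collections import Counter

corpus = (
    "i take my dog for a walk . "
    "she takes her dog for a walk . "
    "i eat my banana for breakfast ."
).split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def sentence_prob(sentence):
    words = sentence.lower().split()
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        # Add-one smoothing so unseen word pairs keep a small, nonzero probability.
        prob *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return prob

print(sentence_prob("I take my dog for a walk"))     # higher
print(sentence_prob("I take my banana for a walk"))  # lower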
__________________
.
.
Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster

____________________
.
.
"We're deeply sorry." -- Pfizer
The Following User Says Thank You to freebeard For This Useful Post:
aerohead (07-21-2021)
Old 07-12-2021, 10:51 AM   #15 (permalink)
Somewhat crazed
 
Piotrsko's Avatar
 
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,061
Thanks: 467
Thanked 1,112 Times in 981 Posts
Yes, but at a truly abstract level, there is a bias in selecting the training words.
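
A crude way to see that selection bias: train the same sort of toy bigram scorer as in the earlier sketch on two different hand-picked corpora, and it gives opposite judgments about the same sentence. Both corpora here are invented:

Code:
# Illustration of how the choice of training text biases a model's judgments.
# Both corpora are invented; the point is only that the selection decides the outcome.
from collections import Counter

def train(text):
    words = text.split()
    return Counter(zip(words, words[1:])), Counter(words), len(set(words))

def score(model, sentence):
    bigrams, unigrams, vocab = model
    words = sentence.lower().split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return p

corpus_a = "electric cars are efficient . electric cars are quiet ."
corpus_b = "electric cars are impractical . electric cars are expensive ."

sentence = "electric cars are efficient"
print(score(train(corpus_a), sentence))  # higher: the sentence fits corpus A
print(score(train(corpus_b), sentence))  # lower: same sentence, different training choice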
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
The Following User Says Thank You to Piotrsko For This Useful Post:
aerohead (07-21-2021)
Old 07-12-2021, 02:02 PM   #16 (permalink)
Cyborg ECU
 
California98Civic's Avatar
 
Join Date: Mar 2011
Location: Coastal Southern California
Posts: 6,299

Black and Green - '98 Honda Civic DX Coupe
Team Honda
90 day: 66.42 mpg (US)

Black and Red - '00 Nashbar Custom built eBike
90 day: 3671.43 mpg (US)
Thanks: 2,373
Thanked 2,172 Times in 1,469 Posts
Quote:
Originally Posted by Piotrsko View Post
Yes, but at a truly abstract level, there is a bias in selecting the training words.
So, bias is interesting, but I think the presenter in the OP is getting at something much more fundamental than bias.

To summarize: there will always be true statements that cannot be proven, because math is (1) incomplete, (2) undecidable, and (3) questionably consistent. It's Gödel's incompleteness theorem. It's Alan Turing's "halting problem." So we are way beyond bias, I think.
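
For reference, the halting-problem argument itself fits in a few lines. This is the standard diagonal sketch written as hypothetical Python; the halts() oracle is assumed to exist, and the contradiction shows it can't:

Code:
# The classic halting-problem contradiction, sketched as hypothetical Python.
def halts(program, program_input):
    """Hypothetically returns True if program(program_input) eventually halts."""
    ...  # assumed to exist; no such general-purpose function actually can

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running a program on its own source.
    if halts(program, program):
        while True:
            pass   # loop forever
    else:
        return     # halt immediately

# paradox(paradox) would halt if and only if it doesn't halt,
# so the assumed halts() cannot exist. That's Turing's undecidability result.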

The presenter claims that lots of systems of everyday life are undecidable, including ordinary things such as the game "Magic: The Gathering" or airline ticketing systems. So I think this partly means chaotic and unpredictable results can emerge without warning.

Are there aspects of car technology that are, or may be, similarly affected by undecidability, incompleteness, and inconsistency in the underlying math?

If the idea of a limit in calculus is "poorly defined," if there are different infinities in the real and natural numbers, and if aspects of particle movement in quantum mechanics will always be incompletely described, what does it matter for automobile tech, whether in thermodynamics, electronics, or aerodynamics? Does it matter in any practical way with existing vehicles, say for understanding breakdowns or repairs that sometimes seem like ghost-in-the-machine events?

I guess I am just wondering why sometimes when something breaks there seems to be no explanation, and why sometimes when I fix it, there seems to be no explanation either. lol
__________________
See my car's mod & maintenance thread and my electric bicycle's thread for ongoing projects. I will rebuild Black and Green over decades as parts die, until it becomes a different car of roughly the same shape and color. My minimum fuel economy goal is 55 mpg while averaging posted speed limits. I generally top 60 mpg. See also my Honda manual transmission specs thread.



The Following User Says Thank You to California98Civic For This Useful Post:
aerohead (07-21-2021)
Old 07-12-2021, 04:08 PM   #17 (permalink)
Master EcoModder
 
freebeard's Avatar
 
Join Date: Aug 2012
Location: northwest of normal
Posts: 27,671
Thanks: 7,768
Thanked 8,580 Times in 7,065 Posts
Gremlins. I always attribute it to Gremlins.

As for bias in training sets, oncet the AI is teasing out inferences that can't be predicted, how do you pre-select a training set to give the desired result instead of its opposite?
__________________
.
.
Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster

____________________
.
.
"We're deeply sorry." -- Pfizer
The Following User Says Thank You to freebeard For This Useful Post:
aerohead (07-21-2021)
Old 07-13-2021, 10:06 AM   #18 (permalink)
Somewhat crazed
 
Piotrsko's Avatar
 
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,061
Thanks: 467
Thanked 1,112 Times in 981 Posts
Three words: random number generators, particularly the ones that are seeded by a traceable function. Run it once, look at the seed, track the results. Dump out the program, retry with a different seed, and compare the results.
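
In Python terms, that seed-tracking idea looks roughly like this. The "experiment" is a made-up stand-in; the point is only that a recorded seed makes a run reproducible, and a different seed gives a comparable run to diff against:

Code:
# Seeded random number generators: same seed -> identical run, new seed -> comparable run.
import random

def run_experiment(seed, trials=5):
    rng = random.Random(seed)          # a traceable, explicit seed
    return [round(rng.random(), 3) for _ in range(trials)]

print(run_experiment(seed=42))   # run it once and record the seed
print(run_experiment(seed=42))   # re-run with the same seed: identical results
print(run_experiment(seed=7))    # retry with a different seed: compare the results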
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
The Following User Says Thank You to Piotrsko For This Useful Post:
aerohead (07-21-2021)
Old 07-20-2021, 12:24 AM   #19 (permalink)
Master EcoModder
 
Join Date: Jan 2012
Location: United States
Posts: 1,756

spyder2 - '00 Toyota MR2 Spyder
Thanks: 104
Thanked 407 Times in 312 Posts
Lol, as a former mathematician who works at an "evil" company and knows someone on the GPT-3 project...

Gödel's incompleteness theorems are meaningless to discuss if you don't know the actual formulation involving models and such. Nothing real-world, or even in theoretical physics, is affected by Gödel's incompleteness theorems (or by Gödel's completeness theorem for first-order logic, if you care to go learn some mathematical logic and want to understand what they really say).

IMO AI is not as far along as people who are scared of an AI Big Brother think, but it's further along than humanists think. GPT-3 looks dumb, but it could very well be only a few architecture steps away from fully understanding natural language. How long it takes people to figure out how to get it there is anyone's guess; could be a hundred years, could be 10.

The Following 2 Users Say Thank You to serialk11r For This Useful Post:
aerohead (07-21-2021), freebeard (07-20-2021)