07-09-2021, 05:02 PM
|
#11 (permalink)
|
Master EcoModder
Join Date: Aug 2012
Location: northwest of normal
Posts: 28,703
Thanks: 8,147
Thanked 8,925 Times in 7,368 Posts
|
Quote:
A 'Watson' that has digested everything ever spoken or written about the philosophical implications of technology will have the upper hand. No human can possibly have command of all that data.
RAND Corporation, the NRO, NSA, CIA, FBI, and Dr. Evil would be candidates for early adopters.
|
Incorrect. It's Microsoft.
They assumed control of GPT-3.
Quote:
Originally Posted by DDG
GPT-3 - Wikipedia
https://en.wikipedia.org/wiki/GPT-3
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters.
[snip]
Microsoft announced on September 22, 2020 that it had licensed "exclusive" use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3’s underlying code.[6]
|
That concern aside, GPT-3 is like the old Oracle of Delphi. Ask stupid questions, get stupid answers. But frame the question properly....
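For what "framing the question properly" looked like in practice at the time: the public API Microsoft left open is a plain text-completion endpoint, and the usual trick was to prime it with a few worked examples before the real question. A minimal sketch, assuming the 2021-era openai Python package and the public davinci engine; the prompt and the car questions in it are mine, purely illustrative.
Code:
# Hedged sketch: few-shot "framing" of a question for GPT-3 via the public
# completion API as it existed in 2021. Assumes the `openai` package and a
# valid API key; the prompt text is illustrative, not taken from any post above.
import openai

openai.api_key = "sk-..."  # your key here

prompt = (
    "Answer car questions precisely and name the relevant system.\n"
    "Q: Why does my coolant level drop with no visible leak?\n"
    "A: Likely a small internal leak (head gasket) or loss from the overflow tank.\n"
    "Q: Why does my MPG fall in winter?\n"
    "A: Denser cold air, longer warm-ups, winter fuel blends, and lower tire pressure.\n"
    "Q: Why does my idle surge after a battery disconnect?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine exposed through the public API
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,
    stop=["\nQ:"],      # stop before the model invents the next question
)
print(response.choices[0].text.strip())
|
The two worked Q&A pairs are the "framing"; with them in place, the raw language model answers like a terse mechanic instead of rambling.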
__________________
.
.Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster
____________________
.
.Three conspiracy theorists walk into a bar --You can't say that is a coincidence.
Last edited by freebeard; 07-09-2021 at 06:17 PM..
Reason: GPT-3 for GTP-3
|
|
|
The Following User Says Thank You to freebeard For This Useful Post:
|
|
07-10-2021, 11:22 AM
|
#12 (permalink)
|
Somewhat crazed
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,425
Thanks: 540
Thanked 1,205 Times in 1,063 Posts
|
Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
|
|
|
The Following User Says Thank You to Piotrsko For This Useful Post:
|
|
07-10-2021, 12:39 PM
|
#13 (permalink)
|
Master EcoModder
Join Date: Jun 2009
Location: SC Lowcountry
Posts: 1,796
Thanks: 226
Thanked 1,353 Times in 711 Posts
|
Quote:
Originally Posted by aerohead
One glaring attribute of the machine logic would be the speed at which a threat could be assessed and instructions signaled for mitigating action.
It's said that the computer is 10X faster than us.
Hierarchy of priorities will be interesting.
Do you dodge the pink-polka-dotted elephant that's just fallen from the palm tree, or the pool of glycol that has just erupted from a ruptured radiator hose?
If a collision with a car full of kids is imminent and you have a moment to react, do you steer for the nose of that car, the passenger compartment, or the trunk?
AI will have to make the right call. Something we might take for granted as humans. And something programmers must be realizing more and more as they attempt to get a CPU to 'think.'
|
Quote:
Piotrsko
Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.
|
Programmers...
__________________
Woke means you're a loser....everything woke turns to ****.
Donald J Trump 8/21/21
Disclaimer...
I’m not a climatologist, aerodynamicist, virologist, physicist, astrodynamicist or marine biologist..
But...
I play one on the internet.
|
|
|
The Following User Says Thank You to redneck For This Useful Post:
|
|
07-10-2021, 02:33 PM
|
#14 (permalink)
|
Master EcoModder
Join Date: Aug 2012
Location: northwest of normal
Posts: 28,703
Thanks: 8,147
Thanked 8,925 Times in 7,368 Posts
|
Quote:
Originally Posted by Piotrsko
Ummmm, so far you only get the programmer's bias; the AI isn't sufficiently intelligent yet to self-program like a newborn human baby.
|
Babies inherit their parents' biases. Programmers are constrained to selecting the training data sets to be used.
Quote:
GPT-3 is a neural-network-powered language model. A language model is a model that predicts the likelihood of a sentence existing in the world. For example, a language model can label the sentence “I take my dog for a walk” as more probable to exist (i.e. on the Internet) than the sentence “I take my banana for a walk.” This is true for sentences as well as phrases and, more generally, any sequence of characters.
Like most language models, GPT-3 is elegantly trained on an unlabeled text dataset (in this case, the training data includes among others Common Crawl and Wikipedia). Words or phrases are randomly removed from the text, and the model must learn to fill them in using only the surrounding words as context. It’s a simple training task that results in a powerful and generalizable model.
....
But here’s the really magical part. As a result of its humongous size, GPT-3 can do what no other model can do (well): perform specific tasks without any special tuning. You can ask GPT-3 to be a translator, a programmer, a poet, or a famous author, and it can do it with its user (you) providing fewer than 10 training examples.
|
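The "more probable sentence" idea in that quote is easy to poke at yourself. A minimal sketch, assuming the Hugging Face transformers library and the public GPT-2 model as a stand-in (GPT-3's weights are not public): score the quote's two example sentences by average per-token loss; the lower number is the sentence the model considers more likely.
Code:
# Hedged sketch: compare how probable a language model finds two sentences.
# Uses GPT-2 (public) as a stand-in for GPT-3; lower average loss = more likely.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_token_loss(text):
    # Average negative log-likelihood per token of the sentence.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)   # labels=ids makes the model score its own input
    return out.loss.item()

for s in ["I take my dog for a walk", "I take my banana for a walk"]:
    print(f"{s!r}: avg loss = {avg_token_loss(s):.2f}")
|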
__________________
.
.Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster
____________________
.
.Three conspiracy theorists walk into a bar --You can't say that is a coincidence.
|
|
|
The Following User Says Thank You to freebeard For This Useful Post:
|
|
07-12-2021, 11:51 AM
|
#15 (permalink)
|
Somewhat crazed
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,425
Thanks: 540
Thanked 1,205 Times in 1,063 Posts
|
Yes, but at a truly abstract level, there is a bias in selecting the training words.
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
|
|
|
The Following User Says Thank You to Piotrsko For This Useful Post:
|
|
07-12-2021, 03:02 PM
|
#16 (permalink)
|
Cyborg ECU
Join Date: Mar 2011
Location: Coastal Southern California
Posts: 6,299
Thanks: 2,373
Thanked 2,174 Times in 1,470 Posts
|
Quote:
Originally Posted by Piotrsko
Yes but on a truly abstract, there is a bias in selecting the training words.
|
So, bias is interesting, but I think the presenter in the OP is getting at something much more fundamental than bias.
To summarize: there will always be true statements that cannot be proven, because mathematics is (1) incomplete, (2) undecidable, and (3) of questionable consistency. It's Godel's incompleteness theorem. It's Alan Turing's "halting problem." So we are way beyond bias, I think.
The presenter claims that lots of systems of everyday life are undecidable, including ordinary daily things such as the game "Magic: The Gathering" or airline ticketing systems. So I think this partly means chaotic and unpredictable results can emerge without warning.
Are there aspects of car technology that are, or may similarly be, affected by undecidability, incompleteness, and inconsistency in the underlying math?
If the idea of a limit in calculus is "poorly defined," and there are different infinities in the real and natural numbers, and aspects of particle movements in quantum mechanics will always be incompletely described, what does it matter for automobile tech, whether in thermodynamics, electronics, or aerodynamics? Does it matter in any practical way with existing vehicles, say for understanding breakdowns or repairs that sometimes seem like ghost-in-the-machine events?
I guess I am just wondering why sometimes when something breaks there seems to be no explanation, and why sometimes when I fix it there seems to be no explanation either. lol
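Since the "halting problem" keeps coming up: the classic argument is that a perfect halts() checker would let you write a program that contradicts it. Below is a toy Python rendering of that thought experiment, not anything from the presenter's video; the function names are mine, and halts() is deliberately left unimplementable.
Code:
# Toy illustration of why a universal halting checker cannot exist.
# Running this just raises NotImplementedError -- the point is the logical
# contradiction IF such an oracle existed.
def halts(func):
    """Hypothetical oracle: would return True iff func() eventually halts."""
    raise NotImplementedError("Turing showed no such total checker can exist")

def troublemaker():
    # If the oracle says we halt, loop forever; if it says we loop, halt.
    if halts(troublemaker):
        while True:
            pass
    return "halted"
|
Whatever answer halts(troublemaker) gave, troublemaker would do the opposite, so no such checker can be right about every program.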
__________________
See my car's mod & maintenance thread and my electric bicycle's thread for ongoing projects. I will rebuild Black and Green over decades as parts die, until it becomes a different car of roughly the same shape and color. My minimum fuel economy goal is 55 mpg while averaging posted speed limits. I generally top 60 mpg. See also my Honda manual transmission specs thread.
|
|
|
The Following User Says Thank You to California98Civic For This Useful Post:
|
|
07-12-2021, 05:08 PM
|
#17 (permalink)
|
Master EcoModder
Join Date: Aug 2012
Location: northwest of normal
Posts: 28,703
Thanks: 8,147
Thanked 8,925 Times in 7,368 Posts
|
Gremlins. I always attribute it to Gremlins.
As for bias in training sets: once the AI is teasing out inferences that can't be predicted, how do you pre-select a training set that gives the desired result instead of its opposite?
__________________
.
.Without freedom of speech we wouldn't know who all the idiots are. -- anonymous poster
____________________
.
.Three conspiracy theorists walk into a bar --You can't say that is a coincidence.
|
|
|
The Following User Says Thank You to freebeard For This Useful Post:
|
|
07-13-2021, 11:06 AM
|
#18 (permalink)
|
Somewhat crazed
Join Date: Sep 2013
Location: 1826 miles WSW of Normal
Posts: 4,425
Thanks: 540
Thanked 1,205 Times in 1,063 Posts
|
Three words: random number generators. In particular, the ones that are seeded by a traceable function. Run it once, look at the seed, track the results. Dump out the program, retry with a different seed, compare results.
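A minimal sketch of that same-seed/different-seed comparison, using Python's standard-library PRNG as a stand-in for whatever generator an AI framework happens to use; the principle is the same.
Code:
# Seeded ("traceable") random number generator: same seed, same results.
import random

def run_experiment(seed, n=5):
    rng = random.Random(seed)           # seeded, therefore reproducible
    return [round(rng.random(), 4) for _ in range(n)]

print(run_experiment(42))   # run it once, note the seed, record the results
print(run_experiment(42))   # rerun with the same seed: identical output
print(run_experiment(43))   # retry with a different seed, compare the results
|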
__________________
casual notes from the underground: There are some "experts" out there that in reality don't have a clue as to what they are doing.
|
|
|
The Following User Says Thank You to Piotrsko For This Useful Post:
|
|
07-20-2021, 01:24 AM
|
#19 (permalink)
|
Master EcoModder
Join Date: Jan 2012
Location: United States
Posts: 1,756
Thanks: 104
Thanked 407 Times in 312 Posts
|
Lol, as a former mathematician who works at an "evil" company and knows someone on the GPT-3 project...
Godel's incompleteness theorems are meaningless to discuss if you don't know the actual formulation involving models and such. Nothing real-world or even in theoretical physics is affected by Godel's incompleteness theorems (or Godel's completeness theorems for first order logic, if you care to go learn some mathematical logic and want to understand what they really say).
IMO AI is not as far along as people who are scared of an AI Big Brother think, but it's further along than humanists think. GPT-3 looks dumb, but it could very well be only a few architecture steps away from fully understanding natural language. How long it takes people to figure out how to get it there is anyone's guess; it could be a hundred years, could be 10.
|
|
|
The Following 2 Users Say Thank You to serialk11r For This Useful Post:
|
|
|