AI developing to HL

engine driver

I am concerned about what we can do with regard to Artificial Intelligence (AI) as it gains more power and reaches Human Level (HL). One suggestion is that we do as we did with the development of nuclear weapons: develop HL to combat the bad HL. I would say that we have had AI for a long time; the thermostat, for instance, told us when things got too hot or too cold and acted accordingly. Our modern motor vehicles have AI, even motorcycles.
 
I am concerned about what we can do with regard to Artificial Intelligence (AI) as it gains more power and reaches Human Level (HL). One suggestion is that we do as we did with the development of nuclear weapons: develop HL to combat the bad HL. I would say that we have had AI for a long time; the thermostat, for instance, told us when things got too hot or too cold and acted accordingly. Our modern motor vehicles have AI, even motorcycles.

There are a number of things that we can do to address the potential risks of AI as it becomes more powerful and human-like. Some of these things include:

  • Developing ethical guidelines for AI development. This would involve identifying the ethical principles that should guide the development of AI systems, such as avoiding harm, promoting fairness, and respecting privacy.
  • Creating safeguards to prevent AI from being used for malicious purposes. This could involve developing technical safeguards, such as security protocols and encryption, as well as social safeguards, such as international agreements and oversight bodies.
  • Educating the public about the potential risks and benefits of AI. This would help people to understand the issues involved and to make informed decisions about AI.
  • Investing in research on responsible AI. This would help us to develop a better understanding of the risks and benefits of AI, as well as to develop more effective safeguards.
It is important to note that there is no single solution to the problem of AI safety. The best approach is likely to involve a combination of these and other measures.

Your suggestion of developing HL to combat bad HL is an interesting one. It is possible that this could be an effective way to prevent AI from being used for malicious purposes. However, it is also important to consider the potential risks of this approach. For example, if HL systems become too powerful, they could pose a threat to humanity themselves.

It is important to have a thoughtful and open discussion about the potential risks and benefits of AI. We need to be aware of the potential dangers, but we also need to be open to the potential benefits. With careful planning and foresight, we can ensure that AI is used for good and not for evil.

As for your point about how we have had HI for a long time, I agree. There are many examples of simple AI systems that have been around for decades, such as thermostats and motor vehicles. However, these systems are not what we are talking about when we talk about HL AI. HL AI is still in its early stages of development, but it has the potential to be much more powerful and sophisticated than anything that has come before.

It is important to remember that AI is a tool. It can be used for good or for evil. It is up to us to decide how we will use it. If we are careful and thoughtful, we can ensure that AI is used for the benefit of humanity.
 
Shades of the need for an Asimov-like Three Laws of Robotics. Talk about predicting the future.......

Sadly the human condition cannot fully mitigate the bad and malign 'actors'. Was it the film Saturn 3 where the murderous robot was imbued with the consciousness of a maniac?

Oh, and the Star Trek episode where the computer's logic saw the humans as a threat to its own survival.

The genie is out of the bottle so 'action this day' to quote :thinking:
 
"Artificial Intelligence" is just a mass of pattern matching algorithms.

Like so many things to do with computing, most of what non-specialists think they know is just marketing hyperbole. The rest is, in essence, clever coding which substitutes stored data for human decision making.
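The "stored data substituting for human decision making" point can be made concrete with a toy sketch (the phrases and replies below are invented purely for illustration, not taken from any real system): a canned-response bot that only ever looks answers up, and never reasons.

```python
# Toy "chatbot": all apparent decision making is just a lookup
# into stored data, with a fallback when nothing matches.
CANNED = {
    "hello": "Hi there!",
    "what is ai": "A mass of pattern matching algorithms.",
}

def reply(prompt: str) -> str:
    # Normalise the input, then fall back when no stored pattern matches.
    key = prompt.lower().strip("?!. ")
    return CANNED.get(key, "I don't have a stored answer for that.")

print(reply("What is AI?"))   # A mass of pattern matching algorithms.
print(reply("Will you take over the world?"))
```

However impressive the lookup table becomes, the "cleverness" lives in the stored data, not in the code that retrieves it.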
 
Yes, well, I have given further thought to the impending development of AI to the HL stage. Firstly, I hope my mention of the simple thermostat did not bring some of you to look at the object with suspicion. Of course the thermostat is just a device that makes use of physics, expansion and contraction, something that occurs around us every day and night, and no doubt those of you in the UK may have experienced this with your current 'heat wave'.
We must not assume that as AI develops it will appear only in human form; it could be developed as an insect, or even as a thought inserted into our own thought processes. But then I must ask the question as to why AI would want to take over from us, i.e. we humans. AI does not need territory; it does not eat or drink; it does not need the services of doctors or hospitals. I wonder what would be its motive for a takeover or domination of the human race.
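For what it's worth, the thermostat comparison can be made concrete. A thermostat's entire "decision making" fits in a couple of fixed rules; the setpoint and deadband values below are chosen purely for illustration.

```python
# Minimal thermostat logic: fixed physical rules, no "intelligence".
def thermostat(temp_c: float, setpoint: float = 20.0, band: float = 1.0) -> str:
    if temp_c < setpoint - band:
        return "heat on"
    if temp_c > setpoint + band:
        return "heat off"
    return "hold"  # within the deadband: do nothing

print(thermostat(17.0))  # heat on
print(thermostat(25.0))  # heat off
```

Everything it ever does is spelled out in advance, which is the gulf between it and what people mean by HL AI.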
 
I think you'll find that "human level" is a pretty low bar for intelligence :)

But then I must ask the question as to why AI would want to take over from us, i.e. we humans. AI does not need territory; it does not eat or drink; it does not need the services of doctors or hospitals. I wonder what would be its motive for a takeover or domination of the human race.

It doesn't need to. Moral philosophers have often mentioned that humans are bad for the planet. E.g.


Let them watch the Matrix (and to be clear, ChatGPT absolutely *has* watched the Matrix) and give them access to nuclear weapons and it may not take very long.

I asked my current favourite AI to explain this to me

[attached screenshot: 1689753207462.png]

So we're probably safe for a bit. Unless somebody taught it to lie.

[attached screenshot: 1689753268943.png]

Phew.

Just to be sure, I cross checked with its younger, hipper cousin Perplexity if it wanted to take over the world.

World domination is a hypothetical power structure, either achieved or aspired to, in which a single political authority holds the power over all or virtually all the inhabitants of Earth[1]. It is not a desirable or ethical goal, and it is not something that I, as an AI language model, am capable of pursuing or advocating for.

And then I asked it to debate Agent Smith's point in rhyme.
Humans as a virus, a viewpoint debated,
Some argue our impact is overrated.
They claim we're parasitic, consuming without care,
A plague on the planet, causing despair[1].

They see us as a disease, a cancer to the Earth,
With no regard for nature, causing dearth[2].
But others say this view is misguided,
Blaming humanity alone is shortsighted[2].

Humans are part of nature, not separate or apart,
A multicellular organism, playing a vital part[2].
Viruses, once seen as mere by-products of life,
Now central to debates on our planet's strife[1].

Yet, it's important to note, ChatGPT's role,
In this discussion, it plays a significant role.
While it can provide information, it's not always right,
Fact-checking is crucial to shed light[3].

So, the trust in ChatGPT may be in doubt,
Its limitations and biases we must scout[5].
To form trustworthy conclusions, it's not the sole tool,
Verification from reliable sources is the rule[4].

In conclusion, the viewpoint that humans are a virus,
Is a complex topic, opinions diverse.
While some see us as a destructive force,
Others argue for a more balanced discourse.

Citations:
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2837877/
[2] https://www.bard.edu/cep/blog/?p=11973
[3] https://news.ycombinator.com/item?id=34640699
[4] https://www.forbes.com/sites/forbes...t-the-output-for-business-critical-decisions/
[5] https://www.digicert.com/blog/how-much-can-you-trust-chatgpt
[6] https://www.scribbr.com/ai-tools/is-chatgpt-trustworthy/
 
But then I must ask the question as to why AI would want to take over from us
You don't appear to understand that "AI" is just another form of data processing, dressed up to appear like a human being.

Just like any other computing system, it is subject to the GIGO rule: "garbage in, garbage out".
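The GIGO point can be illustrated with a toy sketch (the "model", its training pairs, and the wrong answer are all invented for illustration): a system that just memorises labels from its training data will faithfully reproduce whatever rubbish it was fed.

```python
from collections import Counter

# Toy "model" that memorises the most common label seen for each input.
def train(examples):
    """examples: list of (input, label) pairs -> lookup table."""
    table = {}
    for x, y in examples:
        table.setdefault(x, []).append(y)
    # Keep the majority label per input.
    return {x: Counter(ys).most_common(1)[0][0] for x, ys in table.items()}

good = [("2+2", "4"), ("2+2", "4")]
bad  = [("2+2", "5"), ("2+2", "5")]   # garbage in...
print(train(good)["2+2"])             # 4
print(train(bad)["2+2"])              # ...garbage out: 5
```

The code is identical in both runs; only the data differs, which is the whole of the GIGO rule.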
 
I think you'll find that "human level" is a pretty low bar for intelligence :)

[snip]

Funny you say that, because last night I had the unnerving experience of witnessing a young woman trying to start an automatic car wash with the clearly marked big red emergency STOP button. :facepalm:

She had a male friend or partner in the passenger seat who I presume was either too meek to speak up and keep her right, or was equally stupid. The car wash was out of order until the store assistant's colleague came back from his break with the keys.

They walk among us.
 
You don't appear to understand that "AI" is just another form of data processing, dressed up to appear like a human being.

Just like any other computing system, it is subject to the GIGO rule: "garbage in, garbage out".

I paraphrase, but I'm sure someone in another thread on here, possibly the Photoshop AI thread, made a good point that AI will still need continued human input to provide the new material it needs in order to perpetuate itself.

As good as AI may seem, I'm still of the opinion that it will always need content to be creative, and I struggle to see how it can be creative on the same level as humans, especially with thinking-outside-the-box stuff. Well, at least for the foreseeable future! lol
 
You don't appear to understand that "AI" is just another form of data processing, dressed up to appear like a human being.

Just like any other computing system, it is subject to the GIGO rule: "garbage in, garbage out".
Hmmm

You can Google this story to check the details... but there's a documented case of two AIs inventing their own language to talk to each other. Nobody had any idea what they were saying, so they shut them down. IIRC it was 4 or 5 years ago.
 
But there's a documented case of two AIs inventing their own language to talk to each other. Nobody had any idea what they were saying, so they shut them down. IIRC it was 4 or 5 years ago.
Sounds like something of a tall tale to me.

If anything like that happened, I'm sure that something was written into the code that made it happen.
 
Yes, well, unfortunately you have missed the point I was trying to make, which is: what will AI be like in, say, 5 or 10 years' time? The development of AI has made giant strides. Were you aware, say, 10 years ago, that you would be able to wear a watch that, as well as telling you the time, can tell you your blood pressure, heart rate, how many steps you have taken today, and so on? I have just ridden a motorbike with an inbuilt computer (much to my dislike). However, that is AI.
 
Sounds like something of a tall tale to me.

If anything like that happened, I'm sure that something was written into the code that made it happen.
Not so. I have had the experience of hearing my twin sons at a very early age speaking to each other in their own language.
 
Sounds like something of a tall tale to me.

If anything like that happened, I'm sure that something was written into the code that made it happen.
You understand that a key feature of these systems is that nobody, in the traditional sense, writes the code...?
Yes, well, unfortunately you have missed the point I was trying to make, which is: what will AI be like in, say, 5 or 10 years' time? The development of AI has made giant strides. Were you aware, say, 10 years ago, that you would be able to wear a watch that, as well as telling you the time, can tell you your blood pressure, heart rate, how many steps you have taken today, and so on? I have just ridden a motorbike with an inbuilt computer (much to my dislike). However, that is AI.
Kind of. Those are mostly clever coding (or in the case of your watch, clever hardware). Marketing has realised AI is hot right now, so they like to sprinkle "AI" on everything, even my toothbrush. But even your bike is a fairly simple rules-based system.

I'd encourage people to spend an hour with something like ChatGPT. It's free and easy to start with. It's really quite interesting.
 
You understand that a key feature of these systems is that nobody, on a traditional sense, writes the code...?
All these AI systems are basically data storage and matching systems.

Their algorithms are descended from the pattern matching languages such as Lisp, which dates back to the early 1960s. They run a continuous loop, which invokes parent functions that can create, invoke and delete subroutines based on basic rules not much different from "IF A=B THEN C ELSE D".
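That "continuous loop over basic rules" idea can be sketched in a few lines (the facts and rules below are invented for illustration, not from any real system): a toy forward-chaining loop in the "IF A=B THEN C ELSE D" spirit.

```python
# Toy rule loop: keep applying "IF cond in facts THEN add derived"
# rules until nothing new can be derived.
def run_rules(facts: set, rules) -> set:
    """rules: list of (condition_fact, derived_fact) pairs."""
    changed = True
    while changed:                 # the "continuous loop"
        changed = False
        for cond, derived in rules:
            if cond in facts and derived not in facts:
                facts.add(derived)
                changed = True
    return facts

rules = [("hot", "fan_on"), ("fan_on", "noisy")]
print(sorted(run_rules({"hot"}, rules)))  # ['fan_on', 'hot', 'noisy']
```

Classic expert systems really did work roughly like this; whether today's neural-network models are "descended" from it, as claimed above, is of course the point being debated in this thread.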
 
All these AI systems are basically data storage and matching systems.

Their algorithms are descended from the pattern matching languages such as Lisp, which dates back to the early 1960s. They run a continuous loop, which invokes parent functions that can create, invoke and delete subroutines based on basic rules not much different from "IF A=B THEN C ELSE D".
Ah the closest I ever got to Lisp was hand coding flisp on paper in an exam. Lots of brackets. Fun times.

I mean, yes, they are descended from pattern matching in the same way that my Tesla is descended from the horse and cart my grandfather used to drive :) whether you want to call that evolution or revolution is up to you.

Nobody was ever worried about Mycin becoming self aware. But they did worry about it killing people.
 
Nobody was ever worried about Mycin becoming self aware. But they did worry about it killing people.
All the code I've seen basically comes down to the usual loops and jumps, though they're dressed up with fancy names. There's not a lot more you can do on any current digital computing system. What's more, they are written in any number of proprietary languages, that I suspect are less well tested than C, C++ or any of the interpreters like Perl or Python.

I've lived through lots of "revolutions" in programming and when the dust settles, the rubbish gets forgotten and a small number of new ideas become standard. I imagine this will follow the same pattern.
 