Computer's Evolution - Can Computers Overcome Humans?
It is often assumed that computers, machines, and robots will someday match, if not surpass, human intellect.
Surpassing humans would mean reproducing and then exceeding the vital distinguishing characteristics of humans, such as high-level intellect connected with conscious perception.
Can computers and humans be compared?
Can computers develop consciousness?
Published at https://www.oapublishinglondon.com/pop/computers-overcome-humans/ by Alexander McCaslin on 2022-05-22.
These are perplexing and contentious questions, especially given the varied underlying assumptions about how the brain is understood.
Computers, machines, or robots may one day surpass human intellect.
A cognitive framework free of anthropocentrism might make it easier to surpass human capacities.
Humans exhibit emotional behavior, and they can dance, create, and more, in addition to logical activity; intelligence is not only logical, algorithmic, or rational.
The non-brain-like approach to computing assumes a solely rational, logical, and computational intelligence. Can such human traits even be defined?
Futurists presume their existence but do not define them.
Autopoiesis refers to a system's capacity to reproduce and maintain itself.
Morality and ethics are ways of discerning between right and wrong actions and intentions.
Morality depends on context, subjectivity, and consciousness.
Animals cannot connect intellectual and emotional reasoning to make moral judgments. Intelligence, by contrast, can be defined as a system's capacity to use its surroundings to accomplish a purpose.
This term includes robots and computers.
Human intelligence is the capacity to balance cognitive and emotional information processing to maintain autonomy and reproduction.
Turing proposed a test based on a simple exchange of words: questions and responses.
A moral test requires intermediary processes like self-reflection, confidence, and empathy.
How the brain processes information is still unclear; it may not be digital calculation, or even information processing in abstract computational terms. For example, if information is thought of as the content of a message, it needs a physical system in order to be disseminated. Computation is often defined as the syntactic and symbolic manipulation of information; in this definition, computing is a predictable, algorithmic kind of information processing. Artificial neural networks, by contrast, are semi-deterministic, in the sense that it is not always feasible to guarantee what the network is learning.
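This semi-determinism can be illustrated with a toy example (a minimal sketch, not from the article; all parameters are illustrative). The training procedure below is fully algorithmic, yet the learned weights depend on the random initialization, so one cannot guarantee in advance exactly what the unit will learn.

```python
import random

def train_unit(seed, data, epochs=50, lr=0.1):
    """Train a single linear unit; only the random initialization
    varies with the seed, the update rule itself is deterministic."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in data:
            err = target - (w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy AND-like dataset (hypothetical)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w_a, b_a = train_unit(seed=1, data=data)
w_b, b_b = train_unit(seed=2, data=data)
w_c, b_c = train_unit(seed=1, data=data)
# Same seed -> identical result; different seed -> different learned weights
```

The procedure is reproducible given a seed, but across seeds the final weights differ, which is the sense in which what the network learns is not fully determined by the algorithm alone.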
The interplay of deterministic, semi-deterministic, and quantum computation and simulation might be connected with intelligence. One approach to introducing semantics and meaning into artificial networks would be to design interactions between subsystems within artificial neural networks. An analogy between a drum and the brain would be more appropriate than the usual one between the brain and a computer: drums are dynamical systems that exhibit emergent and sub-emergent behavior. Neurons are never static, and their membranes exhibit fluctuations that may nevertheless be informative.
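The drum analogy can be made concrete with a minimal sketch (all parameters hypothetical): a one-dimensional slice of a struck membrane evolved with a finite-difference wave equation. Each point follows a purely local rule, yet globally the strike spreads and reflects as waves, which is the kind of emergent dynamical behavior the analogy points to.

```python
# Sketch of a "drum" as a dynamical system: a 1-D membrane slice
# evolved with the discrete wave equation. Parameters are illustrative.
N = 32                  # grid points along the membrane slice
u_prev = [0.0] * N      # displacement at time t-1
u = [0.0] * N           # displacement at time t
u[N // 2] = 1.0         # the initial "strike" in the middle
c2 = 0.25               # (wave speed * dt / dx)^2, kept < 1 for stability

for _ in range(100):
    u_next = [0.0] * N  # fixed (zero) boundaries, like a drum rim
    for i in range(1, N - 1):
        # Local rule: each point reacts only to its neighbours.
        u_next[i] = (2 * u[i] - u_prev[i]
                     + c2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next
```

After the loop, the disturbance has dispersed across the membrane while the rim stays fixed; the global wave pattern is not written anywhere in the local update rule.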
A computer-brain paradigm is no longer applicable, at least in the present context. Some brain capabilities may still be replicable using modern information-processing formalisms, but new ideas and foundations of information will also be required, particularly for comprehending the true language of brain cells.
Any attempt to make machines that are conscious and can do things that humans can't do should start with some of the definitions listed above.
AI research already addresses questions such as how autonomous systems are. Moral reasoning, however, is only possible if AI incorporates consciousness, a key part of what makes a person a person.
There needs to be a more complete theory of consciousness that connects complex behavior to physical bases.
Putting such theories into machines will require neuromorphic technologies.
Sub-emergent properties can also be thought of as changes in plasticity caused by conscious or voluntary actions.
Anyone hoping for a super-efficient machine would be disappointed: each machine would be a lottery, just as any encounter between two people is.
To build better machines, science should not rely on human-centered assumptions or measure a machine's intelligence against human intelligence.
If robots and machines cannot replicate key parts of the brain's hardware, such as those mentioned above, they will not be able to replicate the corresponding parts of a human being.
One academic and moral goal is to figure out how to put real human traits into machines, for example as a way to stop using animals in experiments.
There are two ways to look at artificial intelligence (AI):
1) The biological-academic approach and 2) the efficient approach.
The goal of both methods is to make better robots and machines that can help us do important but hard tasks or make us better at what we do.
If the goal is to create conscious machines that surpass people, then it follows that computers, as such, will never entirely surpass human capabilities:
if a machine ever accomplished this goal, it should no longer be termed a computer.
Some types of intelligence, however, would be more developed than others, since by definition their information processing would be akin to that of brains, with the constraints that implies.
The question of whether AI will replace human employees presumes that AI and humans have the same attributes and talents, whereas they do not.
AI-powered robots are faster, more precise, and more consistently logical, but they lack intuition, emotional sensitivity, and cultural awareness.
A first objection concerns the human ability to reason.
Critics claim that computers will never be able to reason intuitively because they only apply rules, whereas humans use a delicate and sophisticated kind of inference drawn from experience.
Humanists, for example, argue that a machine could never be a good doctor.
Machines with human-level intelligence are on the horizon, but it is uncertain whether they will be conscious.
Even the most complex brain models are unlikely to generate conscious experiences.