A binary decision tree is binary. A neural network is not. Quantum computers are not. Ternary computers are not.
So you are comparing Lambda's neural networks to neurons in the human brain? Now I'm wondering what you actually do, as you stated that you work in AI. I certainly would not give you the job.
Machine learning. Comparing neural networks to the human brain is natural since the whole concept of computer neural networks is based on biological neural networks. Maybe instead of a (not so) clever comment, you could explain why you feel that way? I had really hoped for more from this thread. ps, it's LaMDA. When making sarcastic comments, it has more effect if you can spell.
Only the transistor is binary. But even then its result is really a matter of probabilities: getting the right electrons to move to the correct place at the correct time. The biggest issue with today's transistor manufacturing at such a small scale is trying to create a deterministic transistor at a scale where particles are anything but deterministic. Deep learning is not binary. The result is more of a probability derived from weighted results. A lot of neural networks even use sinusoidal functions to weight results at some stage.
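A rough sketch of what I mean, with made-up numbers (a toy forward pass, not how any particular model is built): nothing in it is a 0 or a 1, it's all continuous weights.

```python
import numpy as np

# Toy two-layer forward pass: continuous weights, a sinusoidal activation,
# and an output that is a spread of values rather than a single 0 or 1.
rng = np.random.default_rng(0)

x  = rng.normal(size=4)          # input features (continuous values)
W1 = rng.normal(size=(4, 8))     # first layer weights (continuous values)
W2 = rng.normal(size=(8, 3))     # second layer weights (continuous values)

hidden = np.sin(x @ W1)          # sinusoidal activation, values in [-1, 1]
scores = hidden @ W2             # weighted results, all floating point
print(scores)                    # no yes/no anywhere in this computation
```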
It's all binary. It does not matter how many branches you have, it is still binary. Just like the two-way switch your house has for the upstairs/downstairs/staircase light, it is nothing but a binary system.
"Quantum computing does use binary as the gate model with binary basis states. They use a quantum circuit, and the gates modify not the usual binary 1 or 0 bits but qubits. Notably, the output of every quantum computation is either a 0 or 1." We have not escaped Binary processing until the output isn't binary. https://uk.finance.yahoo.com/news/scientists-create-quantum-computer-breaks-150454306.html
That's why I said that the transistors are deterministic, which is the hardware, but not the deep learning, which is the software. Deep learning is just a bunch of weighted probabilities. Even deterministic hardware can process probabilities in software.
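To put it another way, here is an illustrative toy example: the code below is completely deterministic, the same input always gives the same answer, yet what it computes is a probability distribution rather than a yes/no.

```python
import numpy as np

def softmax(scores):
    """Deterministic arithmetic that outputs a probability distribution."""
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()

# Same input always gives the same probabilities -- the hardware is
# deterministic, but the result is a distribution, not a single bit.
print(softmax(np.array([2.0, 1.0, 0.1])))   # -> approx [0.659 0.242 0.099]
```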
You're wrong. I'd recommend you check the math behind the Turing Machine and its implications. In short, with enough RAM, you can theoretically run any algorithm. All modern CPUs are representations of a Turing Machine.
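If it helps, here is a throwaway single-tape Turing machine simulator of my own (a toy sketch, not a reference implementation): swap in a different transition table and the same few lines run a different algorithm.

```python
# Toy Turing machine: a transition table mapping
# (state, symbol) -> (new_symbol, head_move, new_state).
def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# This particular table just flips every bit until it hits a blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("10110_", flip_bits))   # -> 01001_
```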
A 3-way switch is called a 3-way switch everywhere, a switch that has 3 positions. Often also referred to as a 1-0-1 switch. That image seems to be just an example of how to terminate wiring with 3-way switches in a chain. You do not need a 3-way switch to turn lights on or off; a 2-way switch (on/off = 1-0) is enough.
Quantum computing is much like quantum physics. Three possible values instead of two: 1 and 0, but a qubit can also be both at once.
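Roughly what "both at once" means, sketched with toy numbers (no real quantum hardware involved, just the textbook amplitudes):

```python
import numpy as np

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# "Both at once" = neither amplitude is zero; a measurement still gives 0 or 1.
state = np.array([1, 1]) / np.sqrt(2)       # equal superposition of |0> and |1>

p0, p1 = np.abs(state) ** 2                 # Born rule: outcome probabilities
print(p0, p1)                               # roughly 0.5 and 0.5

outcome = np.random.choice([0, 1], p=[p0, p1])   # measurement yields a plain bit
print(outcome)                              # either 0 or 1, never "both"
```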
2 separate switches working one light needs a 3-way switch, i.e. a switch at the bottom of the stairs and a switch at the top of the stairs working the same light.
No it doesn't; a 3-way switch has two positions. It just has two outputs and a common, i.e. 3 wiring points, hence a 3-way switch. Odd naming is odd, but as Airbud states, you need a 3-way switch to make an upstairs light work from either the downstairs or upstairs light switches.

As for this entire AI-becoming-sentient thing: do you seriously think Google of all companies would be keeping it quiet if it had come up with a self-aware AI? Chances are by now Google would have set up some bespoke campaign group to show how much it supports AIs. AI lives matter, perhaps? You know, get right in on the ground floor with this "we are doing the right thing" campaign.

Oh, and Loopy is right: it doesn't matter how sophisticated the software is, once you strip it down to the bare bones it's running on a binary yes-or-no system. I guess the question is whether the basic yes-or-no hardware combined with some sophisticated software would ever be powerful enough to deliver a self-aware computer system.

A more interesting question is whether or not we would be capable of determining if a computer system had become self-aware. The core of this thought comes from how the computer system has been designed to run: if it has been designed to interact with and interpret the world as a human, and been given the sensory inputs to match our own, then we could very well be capable of determining if it is self-aware. If, however, it has none of our senses, then it isn't going to have any of our frames of reference. If it has none of our frames of reference, then its display of self-awareness may be massively different from our interpretation of it.

Think of it in this massively idiot-ified example: if our test of self-awareness was the ability to describe the colour red, how would a self-aware life form that doesn't have any eyes describe it to us? How would we describe it to that life form and expect it to understand what we are talking about?
Ah ok, I thought it meant the number of positions. The Finnish term is practically "3-way switch" for a 1-0-1 switch.
Well, if you want to look at it like that, neurons are also binary. Their output along the axon is either on or off. A neuron fires only when a certain threshold is reached from its inputs. They are quite comparable to transistors in that way. If a collection of biological neurons in a particular network (brain) creates a self-aware human, why wouldn't the same network of neurons, emulated perfectly by software, do the same thing? Remember the chatbots that Facebook created which ended up creating and using their own language? Developers couldn't understand what they were saying to each other, so the experiment was shut down. That proved 2 things: neural nets do things that we don't expect, and things we can't understand. That was 5 years ago. LaMDA's processing capability (the size, speed and complexity of its neural net) must be orders of magnitude greater than the Facebook chatbots'.
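The crude "fires above a threshold" idea, as a toy McCulloch-Pitts style neuron (illustrative numbers only, not a claim about how LaMDA or a brain is actually wired):

```python
import numpy as np

def neuron_fires(inputs, weights, threshold):
    """All-or-nothing output: the axon either fires (1) or it doesn't (0)."""
    return int(np.dot(inputs, weights) >= threshold)

# The spike itself is binary, but the weights and the threshold deciding it
# are continuous, like synaptic strengths.
inputs  = np.array([0.9, 0.2, 0.7])
weights = np.array([0.5, 0.8, 0.3])
print(neuron_fires(inputs, weights, threshold=0.8))  # -> 1 (weighted sum 0.82 >= 0.8)
print(neuron_fires(inputs, weights, threshold=1.0))  # -> 0 (0.82 < 1.0)
```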