You know, like you, I resonated with this idea that I'm fundamentally philosophical, but not into philosophy.
Recently, I've taken a philosophy of science class that worked as an elective towards my degree. It was honestly great. There's plenty of poor philosophy out there but certain subtopics are good. Sounds like you may be into the philosophy of mind.
Your empirical views would probably align with David Hume to some degree (mine do).
I've thought deeply about "uploading" my consciousness into a machine. But it's like you said: the process would likely involve making inferences from neurological structures and producing some representation of them. The result might seem like you from the outside, but it likely wouldn't be you.
This is probably due to a perspective bounded by the limitations of human cognition. We drastically simplify everything and represent it symbolically, because that's how we make sense of the complexity of the universe. How can we ever get an adequate representation of ourselves if we're only capable of truly understanding things after a great reduction or simplification? It brings you to one of those "can a mind understand itself?" type questions.
Many agree that the prevailing force of the universe is increasing entropy, and that we'll ultimately reach thermodynamic equilibrium through it. That seems contradictory to me: supreme order built from increasing disorder (randomness). Diffusion is everywhere, so it's likely true, but as a thought experiment: could it be that we just don't have the capacity or perspective to understand this complexity? Is it truly random, or extremely predictable from a higher level of perspective and knowledge?
This is all a really long way of saying: hey man, we probably just aren't smart enough to ever arrive at a true representation of ourselves. It's like attempting to define a word we all know but can't pin down. Somehow there's always something uncanny or off about something that is not "real", whatever that means.
Ah yes, again I know the names but not much of the content. Will read up on them later today!
That's probably right, yeah. I remember reading this tacky book my mum got me which touched on an interesting idea similar to what you refer to: that God, all-knowing and all-powerful, had only one thing left he didn't know, which was what it was like to not exist.
So he destroyed himself into pieces we call humans on earth who are rebuilding the 'brain' of God, I suppose, in the form of civilisation on a big round rock in space, at which point God will be reborn with the knowledge of not existing.
Seemed silly as I read it, but I do like the struggle we all have with such questions that lead to books like that.
Hume helped usher in empiricist thought. While his work is dated, he's still brought up a lot when the subject comes around. Read the CliffsNotes lol.
Probably why I've definitely heard the name a lot. Even though I might not sit down and read philosophy, I watch a lot of people discuss deeper topics from different perspectives, such as psychology, physics, and history. They all overlap in certain respects in the end.
Richard Feynman said that you can't understand something unless you can build it; that's the real test of whether you understand it. And AI can work with things humans can't: we can't even sense something like a 128-dimensional world, since we're stuck in 3D. It seems inevitable that AI will be able to manipulate humans sooner or later.
Furthermore, as an AI master's student, I believe that for a human, existence is either 1 or 0. We exist, so non-existence exists too. There are experienced (computed) states and unexperienced (uncomputed) states. That's information in binary form, which is all about computation, and thus AI.
Well... AI doesn't really 'understand' anything. That's its primary flaw. It's just pulling up data; it has no concept of what it's talking about.
AI defeated the greatest champions at the board game Go. But researchers discovered they could exploit the fact that it literally has no concept of a board, pieces, or the rules of the game. Ultimately, they taught amateur players to defeat this top Go AI 95% of the time with some exploitative tricks.
The same applies across the board. It's useful and powerful, but I wouldn't conflate that with 'understanding', at least for some years yet. So far, the only way around these issues has been to feed it more data, which is just more of the same. No barrier into sentience is being crossed... so far.
In a lecture, we tested whether ChatGPT can handle sarcasm and indirect speech. The setup was something like this: Bob and Mary watched a movie at the cinema. Afterwards, Bob said, "I think the movie was great, what do you think?" and Mary replied, "I think the popcorn was tasty." The lecturer asked ChatGPT, "Why is Mary talking about popcorn when Bob is talking about the movie?" ChatGPT answered, "Mary may be trying to be polite in conveying that she did not enjoy the movie." At that moment the lecturer said, "We're doomed."
I doubt they could defeat AlphaZero; at this point its odds of winning against human players are essentially 100%. It maps out the game hundreds of moves ahead every time the opponent takes a turn.