AIwithoutFear (@JobstLandgrebe)'s Twitter Profile
AIwithoutFear

@JobstLandgrebe

Jobst Landgrebe is a tech entrepreneur, AI researcher and philosopher.

ID:1258340544872427521

Joined: 07-05-2020 10:19:42

162 Tweets

217 Followers

24 Following

Shannon Vallor (@ShannonVallor):

Kristof: No magic needed. Just real-world complexity of motion/change driven in real time that (like turbulence) resists detailed modeling at the level that AGI would require. Doesn’t mean that AI won’t keep advancing, in many new directions. But AI systems won’t replicate minds.

AIwithoutFear (@JobstLandgrebe):

Some use the Church–Turing–Deutsch thesis ('a universal computing device can simulate every physical process') to support the possibility of AI. But quantum physics fails to capture many aspects of reality. Deutsch overestimates physics.

AIwithoutFear (@JobstLandgrebe):

A reader of our book who does not understand the relationship between the human mind and the world: snodgrass.blog/why-machines-w…

AIwithoutFear (@JobstLandgrebe):

arcinstitute.org/news/blog/evo Evolution has partial ergodic patterns, and these can be encoded in the models. Nucleic acids in isolation are inanimate, and they have regular patterns. It is a different matter altogether when energy flows through a living system.

AIwithoutFear (@JobstLandgrebe):

LLMs do not understand anything (as we know); here is an example from bioinformatics. See also our book 'Why machines will never rule the world': sciencedirect.com/science/articl…

AIwithoutFear (@JobstLandgrebe):

The AI hysteria is even driving mathematicians to worry about AI consciousness and ask for tests. We have shown in 'Why machines will never rule the world' that it can neither be engineered nor evolve. nature.com/articles/d4158…

AIwithoutFear (@JobstLandgrebe):

Marcus is right here: what is sold as autonomous solution-finding by 'AI' is just ordinary computational support for mathematics. There is no 'AI', obviously; a marketing term is sold as reality. See our book 'Why machines will never rule the world': garymarcus.substack.com/p/sorry-but-fu…

AIwithoutFear (@JobstLandgrebe):

It is now believed that so-called world models by OpenAI (digitaljournal.com/pr/news/getnew…) are able 'to understand and simulate complex systems'. This is fundamentally wrong for thermodynamic reasons, as Smith and I show in 'Why machines will never rule the world'.

AIwithoutFear (@JobstLandgrebe):

AI nutters around Levandowski founded an AI church. This is a revamp of the Church of Positivism founded by Auguste Comte in the 19th century, where faith was replaced by scientism. These new founders live under the blessing of a total lack of education. zerohedge.com/political/form…

AIwithoutFear (@JobstLandgrebe):

My debate with transhumanist Susan Schneider on AI at the Soho Forum, Manhattan: youtube.com/live/1rnam1w8z…

AIwithoutFear (@JobstLandgrebe):

Gary Marcus does not see the absolute limits of AI in our inability to model the thermodynamic properties of complex systems, as we (Barry Smith and I) do, but he certainly sees the symptoms of AI's failures clearly: garymarcus.substack.com/p/rethinking-d…

AIwithoutFear (@JobstLandgrebe):

Many academic colleagues do not understand that we will never obtain synoptic and adequate models of complex systems due to the essential limitations of mathematics. Here is a recent example on AI and ecology from PNAS. pnas.org/doi/abs/10.107…

AIwithoutFear (@JobstLandgrebe):

ChatGPT and other LLMs are on the verge of failing. Ultimately, humans want texts to be in accordance with social norms. LLMs cannot provide this experience since they only produce chains of symbols based on multivariate likelihood. And they can't be corrected at will.
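As a hypothetical illustration of "chains of symbols based on multivariate likelihood" (the toy vocabulary and probabilities below are invented, not taken from the thread or the book): an autoregressive language model repeatedly samples the next token from a probability distribution conditioned on the preceding tokens, and nothing in that procedure refers to social norms.

```python
# Minimal sketch of autoregressive generation: each step samples the next
# symbol from P(next token | previous token). The vocabulary and
# probabilities are invented for illustration; real LLMs learn such
# conditional distributions over huge vocabularies and long contexts.
import random

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"down.": 1.0},
    "ran": {"away.": 1.0},
    "model": {"sampled.": 1.0},
}

def generate(start: str, max_len: int = 6) -> str:
    """Extend a chain of symbols by sampling one next token at a time."""
    tokens = [start]
    while len(tokens) < max_len and tokens[-1] in NEXT_TOKEN_PROBS:
        dist = NEXT_TOKEN_PROBS[tokens[-1]]
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down."
```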

AIwithoutFear (@JobstLandgrebe):

Interesting pseudo-research about singularity fear. I would put it more simply: if you believe in the singularity or AI doom, you lack knowledge or the power of judgement, or you have a strategic interest in letting others believe this (Grand Inquisitor-like). sebjenseb.net/p/who-believes…

AIwithoutFear (@JobstLandgrebe):

Ray Kurzweil, a great technical pioneer of the second wave of AI, is either on psychoactive drugs or has lost his power of judgement to a physiological process such as aging. youtube.com/watch?v=ykY69l…

AIwithoutFear (@JobstLandgrebe):

This text by Scott Alexander is nonsense. We know for certain that AI and AGI are impossible. Therefore, we do not have to be afraid that AI will become a moral subject. astralcodexten.substack.com/p/mr-tries-the…
