"A Maximalist Hypothesis: ChatGPT and other large language models contain and understand, respectively, all true human knowledge, averaged out of their training data in the form of conceptual abstractions, consistent causal systems, and models of the world. This wealth of knowledge, however, materializes only through prompting. I'm not interested in the simplest problems a language model fails at, but in the most complex ones at which it succeeds or fails."

(The site is under construction; apologies for small gaps here and there)




Janne P. Hukkinen

AI/AGI Researcher, independent

Working on

  • 2024 prospective PhD studies
    • sensory, motor, and cognitive capacities
  • 2023 AGI (artificial general intelligence)
    • systemic underpinnings, constraints, assumptions, and cognitive design tools
    • how much world knowledge can be pumped out of large language models?
    • how to orchestrate a modular cognitive agent / system?
    • world knowledge & understanding: representations in latent embedding space. How far can we get with terabyte-scale language data and the corresponding Transformer/GPT-3 language models? When is embodied, enacted, and situated grounding needed, if ever?
    • do we need cognitive theory anymore? how can it inform us?
      • theory, cognitive architecture, design & evaluation framework
      • what is known and unknown?
      • curriculum learning / critical developmental periods
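
One of the questions above, how to orchestrate a modular cognitive agent/system, can be sketched as a blackboard-style loop: independent modules read and write a shared state in a fixed cycle. The module names and interfaces below are hypothetical illustrations, not an existing framework; the `reason` step is a stub where an LLM or planner call would go.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Blackboard:
    """Shared state that all modules read from and write to."""
    facts: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

# A module is just a named function that updates the blackboard.
Module = Callable[[Blackboard], None]

def perceive(bb: Blackboard) -> None:
    # Stub sensor: in a real agent this would ingest observations.
    bb.facts["observation"] = "door is closed"
    bb.log.append("perceive")

def reason(bb: Blackboard) -> None:
    # Stand-in for an LLM or planner call that turns observations into a plan.
    if bb.facts.get("observation") == "door is closed":
        bb.facts["plan"] = "open the door"
    bb.log.append("reason")

def act(bb: Blackboard) -> None:
    # Commit to an action; fall back to waiting if no plan was produced.
    bb.facts["action"] = bb.facts.get("plan", "wait")
    bb.log.append("act")

def run_agent(modules: list[Module], steps: int = 1) -> Blackboard:
    """Orchestrate: run the module pipeline for a fixed number of cycles."""
    bb = Blackboard()
    for _ in range(steps):
        for module in modules:
            module(bb)
    return bb

bb = run_agent([perceive, reason, act])
print(bb.facts["action"])  # -> open the door
```

Swapping the stubbed `reason` for a language-model call is exactly where the "how much world knowledge can be pumped out" question meets the orchestration question.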

Hands on



  • MA (cognitive science; minors: computer science)
  • BA


Twitter @Hukkinen. ActivityPub @HuK@mas.to. YouTube. LinkedIn



  • email: Janne (a) inrobotico.com