Welcome

"I believe (a hypothesis) ChatGPT and other large language models respectively contain and understand all the averaged true humane knowledge of its training data, as conceptual abstractions, concistent causal systems, and models of the world. This wealth of knowledge, however, materializes only when prompting. I'm not interested in the simplest problems a language model fails, but the most complex ones it succeeds or fails."

(The site is under construction; apologies for minor gaps here and there.)


Janne P. Hukkinen

Independent AGI Researcher

Working on

  • AGI (artificial general intelligence)
    • systemic underpinnings, constraints, assumptions, and cognitive design tools
    • how much world knowledge can be pumped out of large language models?
    • how to orchestrate a modular cognitive agent / system?
    • world knowledge & understanding: representations in latent embedding space, how far we can get with tera byte language data, and respective Transformer/GPT-3 language models? When is embodied, enacted, and situated grounding needed, if ever?
    • do we still need cognitive theory? how can it inform us?
      • theory, cognitive architecture, design & evaluation framework
      • what is known and unknown?
      • curriculum learning / critical developmental periods
  • prospecting PhD studies

Hands on

Publications

Education

  • MA (cognitive science; minor: computer science)
  • BA

Profiles

Twitter @Hukkinen. ActivityPub @HuK@mas.to. YouTube. LinkedIn.

Contact

  • email: Janne (a) inrobotico.com
