More than words, or, how to help a neural network understand learning

The way in which we got computers to work with words is impressive - as long as you don't expect more than the method can actually deliver.

To simplify: neural networks got reasonably good with language by isolating words (or word pieces) and encoding them as vectors. That representation gave us access to the kinds of math needed to train the models and produce results at scale.
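
As a rough illustration of what "words as vectors" means - here's a minimal sketch in Python, with a made-up three-word vocabulary and random vectors standing in for the embeddings that real models learn during training:

```python
# A toy sketch only: real systems learn embeddings over subword tokens,
# rather than using this hand-rolled random lookup table.
import numpy as np

vocab = {"learning": 0, "is": 1, "hard": 2}

# Each word becomes a row in an embedding matrix: a vector of numbers
# the network can add, multiply, and differentiate through.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # 4 dimensions, for illustration

sentence = ["learning", "is", "hard"]
vectors = np.stack([embeddings[vocab[w]] for w in sentence])
print(vectors.shape)  # (3, 4): three words, each now a vector of 4 numbers
```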

Now, how about learning?

If you were to simplify the learning process to something that words can fully encapsulate, then there might be a chance of replicating the kind of work that was done with language, in order to assist learners and teachers with some seriously adaptive learning. This, I suppose, is the obvious shortcut that all the AI and EdTech firms are now racing towards (before the money and goodwill run out): just get the machines, learners and educators to talk it out, and consider the 2 Sigma problem (Bloom's finding that one-to-one tutoring lifts learners two standard deviations above the classroom average) solved on the basis of this language exchange alone.

But we don't need to wait for this project to lose its pizzazz, shrivel, under-deliver and fail before grasping something else that can be done here.

If we were to do to the learning process what we did to language - that is, format it in a way that makes it more amenable to the vectors-layers-matrices-and-maths approach - then words are definitely not the only game in town. Off the top of my head, I can think of test scores, time of day / year, task type, energy/stress/eustress/motivation levels, previous exposure to the material, presence/absence of peers, rhythm and pace of revisions...

All these factors (or at least some of them) could be encoded to help neural networks train themselves to "predict the next" - the next opportunity for an intervention, the next likely mistake, the next difficult moment on a learning path.
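
To make that concrete, here's a minimal sketch of what encoding a learner's state might look like. Every feature name and value below is invented for illustration - a stand-in for the data a real adaptive-learning system would have to collect and validate, not a working design:

```python
# Toy encoder: one snapshot of a learner's context becomes a flat vector,
# the same kind of numeric input we already feed to neural networks.
import numpy as np

TASK_TYPES = ["recall", "application", "synthesis"]

def encode_learner_state(state: dict) -> np.ndarray:
    """Turn one snapshot of a learner's context into a feature vector."""
    task_onehot = [1.0 if state["task_type"] == t else 0.0 for t in TASK_TYPES]
    return np.array([
        state["last_test_score"],                  # 0.0 - 1.0
        state["hour_of_day"] / 24.0,               # time of day, normalized
        state["motivation"],                       # self-reported, 0.0 - 1.0
        min(state["prior_exposures"], 10) / 10.0,  # capped exposure count
        1.0 if state["peers_present"] else 0.0,    # presence/absence of peers
        *task_onehot,
    ])

snapshot = {
    "last_test_score": 0.62, "hour_of_day": 21, "motivation": 0.4,
    "prior_exposures": 3, "peers_present": False, "task_type": "recall",
}
x = encode_learner_state(snapshot)
print(x.shape)  # (8,) - a vector a model could learn from, e.g. to predict
                # "next likely mistake" or "good moment to intervene"
```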

You can see already why nobody's in a rush to create anything like this. In the machine learning race, language is the racecourse that we've all begun to understand, value (= monetize!) and pay attention to. This, after DECADES of work behind the scenes, and enormous investment (not to mention the questionable tactics of procuring enough training material!). If you explained to any machine learning professional what they'd have to do to make adaptive learning work, their eyes would glaze over, and they'd (reasonably) conclude that there's never going to be enough shiny ROI at the end of that tunnel.

But just because a project is too slow and cumbersome to be profitable doesn't mean it should be abandoned. I'm looking forward to the time after the language-model hype, and to seeing whether smaller, local, less ego-driven and less power-hungry machine learning initiatives can begin to chip away at the adaptive learning idea.

