The document summarizes research on modeling the processing of multiple sequences with an unsupervised neural network based on the Hypermap Model. Key points:
- The researcher extends previous models to handle complex sequences that contain repeating subsequences, and to store multiple co-occurring sequences without interference.
- Modifications include a short-term memory that dynamically encodes the time-varying sequence context, and inhibitory links that enable competitive queuing during recall (see the sketch after this list).
- Experimental evaluation shows that the network correctly recalls sequences from partial context cues and when sequences overlap.
- Future work aims to model the transition from single-word to two-word child speech and incorporate temporal processing of multimodal inputs like gestures.
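To make the two mechanisms concrete, here is a minimal sketch assuming a leaky-integrator short-term memory that encodes time-varying context and recall-time inhibition of already-emitted associations as a stand-in for competitive queuing. The class `SequenceMemory`, the decay parameter `stm_decay`, and the one-shot storage rule are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of: (1) short-term memory encoding time-varying context,
# (2) inhibition-based competitive queuing during recall.
# All names and update rules here are assumptions for illustration.
import numpy as np

class SequenceMemory:
    def __init__(self, dim, stm_decay=0.6):
        self.dim = dim
        self.stm_decay = stm_decay  # how quickly older inputs fade from the context
        self.contexts = []          # stored context vectors (keys)
        self.items = []             # stored next-item vectors (values)

    def _update_stm(self, stm, item):
        # Leaky integration: old context decays, the new item is superimposed.
        return self.stm_decay * stm + (1.0 - self.stm_decay) * item

    def store(self, sequence):
        # One-shot storage: associate each running context with the next item.
        stm = np.zeros(self.dim)
        for item in sequence:
            item = np.asarray(item, dtype=float)
            self.contexts.append(stm.copy())
            self.items.append(item)
            stm = self._update_stm(stm, item)

    def recall(self, cue_context, length):
        # Competitive queuing: the best-matching association wins, is emitted,
        # and is then inhibited so the next association can win in its turn.
        stm = np.asarray(cue_context, dtype=float)
        inhibition = np.zeros(len(self.items))
        out = []
        for _ in range(length):
            scores = np.array([-np.linalg.norm(stm - c) for c in self.contexts])
            scores -= inhibition
            winner = int(np.argmax(scores))
            out.append(self.items[winner])
            inhibition[winner] += 1e3  # inhibitory link suppresses the winner
            stm = self._update_stm(stm, self.items[winner])
        return out

def one_hot(i, dim=5):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

# Usage: store two overlapping sequences of one-hot "symbols" and recall
# the first one from an empty (partial) context cue.
mem = SequenceMemory(dim=5)
mem.store([one_hot(0), one_hot(1), one_hot(2)])  # sequence A B C
mem.store([one_hot(3), one_hot(1), one_hot(4)])  # sequence D B E (shares B)
recalled = mem.recall(np.zeros(5), length=3)
print([int(np.argmax(v)) for v in recalled])     # -> [0, 1, 2]
```

Because the context vector carries a decaying trace of the whole history rather than only the previous item, the shared item B is followed by C or E depending on which sequence produced the current context, which is how overlapping sequences can be recalled without interference in this simplified setting.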