We recently proposed neuro-vector-symbolic architectures (NVSA), in which high-dimensional distributed vectors are generated by neural nets and further processed by VSA-informed machinery at different levels of abstraction. Using NVSA, we set state-of-the-art accuracy records on few-shot continual learning [CVPR 2022] as well as on visual abstract reasoning tasks [Nature Machine Intelligence 2023]. The advantages do not end there: NVSA also reduces the computational complexity of both perceptual and reasoning tasks, even on modern CPUs/GPUs. NVSA extended computation-in-superposition to highly nonlinear transformations in CNNs and Transformers, effectively doubling their throughput at nearly iso-accuracy and iso-computational cost [NeurIPS 2023]. NVSA also made probabilistic abduction tractable by avoiding exhaustive probability computations and brute-force symbolic searches, leading to 244× faster inference than the probabilistic reasoning in state-of-the-art approaches [Nature Machine Intelligence 2023]. Finally, NVSA enabled learning-to-reason: instead of hard-coding the rule formulations associated with a reasoning task, NVSA can transparently learn them with just one pass through the training data [NeurIPSW Math-AI 2023].
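To make the underlying VSA machinery concrete, below is a minimal Python sketch of the basic operations NVSA builds on: binding (elementwise multiply), bundling (addition into superposition), unbinding, and associative cleanup. The hypervector dimension, the bipolar encoding, and the toy role/filler codebook are illustrative assumptions for this sketch, not the actual implementation from the cited papers.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (assumed; larger D lowers crosstalk)

def hv() -> np.ndarray:
    # Random bipolar hypervector; any two are quasi-orthogonal w.h.p.
    return rng.choice([-1.0, 1.0], size=D)

# Toy symbol codebook (illustrative names, not from the papers)
codebook = {name: hv() for name in
            ["circle", "square", "triangle", "red", "blue", "green"]}

# Bind roles to fillers (elementwise multiply) and bundle (add) them into
# one vector: a compound symbolic structure held in superposition
shape_role, color_role = hv(), hv()
scene = shape_role * codebook["square"] + color_role * codebook["green"]

# Query "what color?": unbinding with the role key is the same elementwise
# multiply, since bipolar binding is self-inverse (role * role = all-ones)
noisy_color = color_role * scene

# Cleanup: one sweep of dot products against the codebook recovers the
# filler; the unmatched term contributes only quasi-orthogonal noise
best = max(codebook, key=lambda name: float(codebook[name] @ noisy_color))
print(best)  # -> 'green' with overwhelming probability for large D

Because the query is resolved with a handful of dot products over one superposed vector rather than by enumerating candidate structures one at a time, this cleanup step gives a rough intuition for how NVSA-style reasoning can sidestep brute-force symbolic search.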