I am a member of the Telecom and Mobile Research group at IRL, where I build technology that facilitates socio-economic development. I enjoy solving challenges that have both social impact and academic relevance. For the last few years, I have used Spoken Web as a vehicle to realise this agenda.
Most of my current work revolves around spoken document retrieval, a topic motivated by various Spoken Web deployments. Within these deployments, we noticed that users not only needed to search for spoken documents, but did so using verbose spoken queries, much as though they were having a conversation. Using concepts from query performance prediction and machine learning, we have been able to identify when to interrupt a speaker and present a relevant result [ref]. We have extended this work to low-resource, non-transcribed audio using zero-resource term detection; initial results are promising [ref].
From 2011 to 2013, I was a technical leader for the Smarter Employability Platform, a Spoken Web application that connected job seekers to job providers. The project started as a research agreement between IBM and the Government of Karnataka and culminated in a state-wide launch in early 2013. Aside from acting as the technical liaison to the government, I led research efforts on user studies [ref], interface design, and the technical aspects of candidate-employer matching [ref].