# About Us
# What we do
The San Francisco Voice Company builds an observability platform for teams that run their own voice AI orchestration, develop voice models, or ship voice agents.
We evaluate audio and make it searchable. When a customer simply says "Your agent sounded wrong," the normal debugging process is to go through the recorded calls one by one, listening to each for 3-20 minutes to work out what "wrong" means. That might be fine at 5-10 calls a day, but the problem multiplies once you have 5k+ calls on your platform.
Much as Spotify indexes music and can build a "radio" around a single song, we help you find out how, when, and why a conversation went wrong and auto-label the time range where it happened. No existing tool on the market can tell you this without deliberately re-evaluating and reprocessing the audio buffers.
Traditional observability tools filter and aggregate data as time series; more recent ones index by call graph. We index by turn and by corpus for our search engine, while keeping everything locally optimised.
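The turn-level indexing idea can be sketched in a few lines. This is a toy illustration with hypothetical names (`Turn`, `TurnIndex`), not our production engine: each conversational turn carries its time range within the call, so a text hit maps straight back to a labelled audio span.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    call_id: str
    speaker: str            # "agent" or "caller"
    start_s: float          # time range of the turn within the call
    end_s: float
    transcript: str
    labels: list = field(default_factory=list)

class TurnIndex:
    """Toy inverted index over turns (illustrative only)."""
    def __init__(self):
        self.turns = []
        self.postings = {}  # token -> positions of turns containing it

    def add(self, turn):
        pos = len(self.turns)
        self.turns.append(turn)
        for tok in turn.transcript.lower().split():
            self.postings.setdefault(tok, []).append(pos)

    def search(self, token):
        return [self.turns[i] for i in self.postings.get(token.lower(), [])]

# Index two turns from one call, then jump to where "wrong" was said.
idx = TurnIndex()
idx.add(Turn("call-42", "caller", 0.0, 4.2, "your agent sounded wrong yesterday"))
idx.add(Turn("call-42", "agent", 4.2, 9.0, "sorry could you tell me more"))
hits = idx.search("wrong")
print([(t.call_id, t.start_s, t.end_s) for t in hits])  # [('call-42', 0.0, 4.2)]
```

Because the unit of indexing is the turn rather than a time-series bucket or a whole call, a search result is already a playable, labelable slice of audio.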
