A new friend and I were recently forging our relationship with deep thinking about some of our shared “futurist” interests.
He asked me an amazingly poignant question.
Would I trust an AI agent with my medical data?
My first answer was off the cuff: “Depends who’s vouching for it. We benefit from the UCSF medical system for our kids - and if they’re suggesting it, then I know [ok I’m assuming] that there’s been some appropriate vetting done.”
A pause as I thought longer.
“Actually, we encountered a medical challenge recently (which I proceeded to explain). If the agent could help unravel medical mysteries, then yes.”
And then overnight, as I reflected on my self-directed prompt - "is there an interesting bit of thought leadership to tackle here?" - I realized there's a bigger question at play:
What exactly is digital trust, and how important is that in productization?
So let’s go on a mini-journey together, to unpack some interesting topics:
- Catalyzing adoption despite discomfort - a personal story
- The importance of the introduction in trust
- Can you win without trust?
Catalyzing Adoption Despite Discomfort
As someone who's inherently skeptical of things and likes to pick apart problems, real "trust" doesn't necessarily come easy.
How many times over the last few years have you gotten notice of a “data breach” from a company that you trusted with your information? Or that was simply a data processor? I don’t imagine there’s a single person out there that hasn’t gotten burned at some point based on “trusting” something or someone.
And yet, despite burning our hand on the stove, we still find it in ourselves to trust again.
- Perhaps not as easily.
- Perhaps knowing we’re likely to be burned.
- But trust we do.
So I asked myself: why? What would it take for me to truly trust a solution, knowing there was so much inherent risk in doing so?
Last year we faced a medical issue in our family. The type of thing that scares me to talk about - that I'm reticent to even share, except in cagey terms.
We pursued months of specialists and tests - simply looking for a diagnosis.
One by one, we ruled out all kinds of horrendous things.
And at the end of the day, our journey ended with: we don't know what it is, we've ruled out nearly all the "biggies", but it's trending better, not worse. So be happy with not knowing.
It’s a conclusion that kills me inside - is this truly the best outcome of modern medicine?
No. Simply No.
But it’s a product of the system.
After one, two, or more months of waiting, you get to your appointment with a renowned specialist, who has perhaps 5 minutes to review the latest diagnostic tests - and even less time to review the notes from the myriad of other specialists. They rapidly do the consult, and with nothing glaringly wrong, and without an instant epiphany, you get, at best, more tests - at worst, the declaration "good news, it's not X, Y, Z".
These brilliant minds have wisdom and experience. But circumstances mean they lack the time to process all the data at their disposal - to identify those minute details that might mean something (other than the big things they’re really looking for).
This is the perfect type of problem for an AI augmentation - to process the reams of data, conduct differential diagnosis, surface little known journal articles or cases that might bear resemblance, and then arm those specialists.
Back to the topic at hand. Having reached the end of the journey, and imagining an AI agent that - in exchange for my privacy (and the potential leaking of our medical data) - could have surfaced an answer that created a different outcome: in a heartbeat I'd have made that tradeoff.
And that became my aha realization.
There’s a host of problems out there for which the pain exceeds the discomfort of real, or even just potential, broken trust. Addressing those problems is more than sufficient to catalyze adoption. Leading us to the real takeaway to consider:
How painful does a problem have to be before we’re willing to embrace discomfort to have it solved?
The Importance of the Introduction in Trust
I reach out to people cold ALL THE TIME.
If you’re doing something interesting… you are knowledgeable about a topic I want to learn more about… you’re involved in an area where I have a passion… I want to learn about your company… I think you have cool hair/t-shirt/gave a fascinating talk - I’ll just reach out and ask for a coffee (promising only an interesting conversation in exchange!)
You ought to be flattered!
Perhaps you won't be surprised that despite what I think is my amazing background and achievements, and my clearly not being an AI bot or someone trying to sell you a franchise (or other product/service), the hit rate is really low.
And it is of course a tautology to say the hit rate is orders of magnitude better if someone made an introduction.
We often talk about time as the valuable commodity, but I believe it’s mindshare.
There’s, simply put, limited mindshare to go around for all the hours in the day and things to get done - so how can you possibly know that I’m worth the mindshare?
Venture capitalists LOVE to give this advice.
I can’t tell you how many times I’ve heard this.
"If you want to get a meeting with us, find someone in your network who knows us and have them make the intro."
Or put differently, find someone who is willing to put their reputation on the line for you.
And that’s a poignant social capital flag!
Does it mean I’m actually trustworthy? Honestly, no.
But it’s a signal that I’m worth considering.
Can You Win Without Trust?
As I write this staring out the window on a stereotypical foggy/gloomy/rainy SF fall day, I consider my walk over from the train station and a poster (ad) that caught my eye on a Muni stop.
It’s an ad for a database company, ClickHouse.
I’ve never heard of them (but TBH I’m not exactly deep in the latest of database technology).
But it’s the content of the ad that is telling. The core message of the ad is “Trusted by Vercel” (if you don’t know who Vercel is, sorry for the failed anecdote!).
- It's not an ad about what they can do.
- Or what makes them better than the competition.
- Or what problem you have that they can solve.
We do databases. Vercel trusts us. Come on?!?
Isn’t that enough?
How often have we seen the wall of logos on pitch decks? On sales presentations? How many people have LinkedIn headlines that talk not about what they do, but about the folks they've worked for: Xoogler. Ex-McKinsey. Ex-Netflix. And on and on.
Is that really the currency we should be trading on? Someone else trusted you first, so without even knowing anything about you, I should too?
Some of the most brilliant people I’ve ever met have worked on solutions or at companies you’ve never heard of. Their capacity to solve problems is certainly no worse than those that come armed with the transitive-company-trust-property.
And yet…
Coming back to the personal story that anchored all of this - my fascinating discussion with my newfound friend. (FYI: I would name him, but I didn't exactly ask his permission to share what we were discussing, so I'll leave it to him to weigh in on his own if he wants to!)
Let’s say the medical AI augmentation exists… because let’s face it, it will. It needs access to all the medical records to work its magic.
- No trusted intro from UCSF: I’d be willing to use it, but only after I reached that point of painful failure. It could be valuable, but I may never get to it.
- With the intro from UCSF: Sure it would be buyer beware, but I’d give it a shot. I’d be much more likely to realize value, but a lot less likely to truly feel the pain that made me appreciate quite how valuable it is.
Can you win without trust?
I believe the answer is yes, but trust me when I say that’s probably not the best path to achieving success.