Q: When you think about the big vision, and it still blows my mind that this is your big vision, of "I'm going to send a digital twin into a meeting, and it's going to make decisions on my behalf that everyone trusts, agrees on, and acts upon," the privacy risk there is even higher. The security surface becomes even more ripe for attack. If you can hack into my Zoom and get my digital twin to go do stuff on my behalf, whoa, that's a big problem. How do you think about managing that over time as you build toward that vision?
A: That's a good question. Again, back to privacy and security, I think of two things. First, how do we make sure somebody else cannot hack into your meeting, so that this really is Eric and not somebody else? Second, during the call, how do we make sure your conversation is secure? Literally just last week, we announced the industry's first post-quantum encryption. That's the first piece. At the same time, look at deepfake technology; we're working on that as well, to make sure deepfakes will not create problems down the road. It's not like today's two-factor authentication. It's more than that, right? Because deepfake technology is real now with AI, this is something we're also working on: how to improve that experience as well.
Spoken like someone who has not given one iota of thought to this issue and doesn't know what most of the words he's saying mean.