AI is too PC, too privacy-cautious
Here’s what needs to happen
Meta.ai, Gemini, ChatGPT and other consumer AI offerings are too politically correct to be properly useful. You can’t get a worthwhile response to a question about medical science, a political or economic prediction, a prominent person, or race. The dataset used to train the model certainly contains the answers, but the engineering around inference has been so cautious about offending people that the AI’s response is invariably a watered-down, unusable piece of blandness that will not serve us or assist in our mission, whatever it is. The political right is having a field day with this, but that’s not why I’m writing this article.
Privacy concerns have also caused the providers to use excessive caution around the model’s ability to learn about us, in two ways: 1) our prompt history, and 2) our social media profile. Imagine the power of an AI with 405 billion parameters that has also ingested all of the user’s social media history, profile, social graph and advertising response history. It would be able to help in ways that even the smarter consumers out there have not yet imagined or realized they need. I could ask Facebook Marketplace for coaching on how to word my for-sale items, how much to charge, and which photos are most likely to make a sale while remaining truthful in my representation. I could ask Meta to coach me on Facebook Dating, explaining why I lost the attention of that woman I really liked, and why I’m attracting likes from women who are not in shape or into an active lifestyle. I could ask Gemini to coach me on effective communication, having read all of my emails, calendars, Google Drive content and advertising responses.
One big miss by Pi, the personal AI from Mustafa Suleyman’s former venture, Inflection.ai, before it was raided by Microsoft, was its outsized concern for privacy, which led to its inability to learn from the user over time. Pi was not allowed to store and learn information about its users. It was therefore unable to build insights and develop a deep understanding of whether you’re depressed, desperate to find a job, unhappy in your marriage, worried about cancer, excited about your new venture, madly in love, or horribly lonely. For a personal AI, pitched to investors as a precursor to the personal robot that understands you, this looks like a strategic error and a lost opportunity.
Grok, Elon Musk’s xAI project, promises to avoid the pitfalls of wokeism as it trains on Twitter’s vast trove of data. It is intended to give you the truth as it understands it, regardless of whom it may offend. While Elon’s political behavior is currently a bit unhinged, he has a point about wokeism being problematic when it opposes the truth and free speech.
Back to those robots.
Elon’s Optimus may win the early race to sell you a personal robot that is genuinely useful. At first, your robot may not be able to do much beyond carrying in the groceries, peeling potatoes and vacuuming the living room. However, if allowed to learn about you, it could build a profound understanding of you and your needs, habits, fears, likes and loves. Your robot should be able to transform itself into a capable personal assistant that plays multiple vital roles in your life:
- Lawyer. Not one that can defend you in court, but this lawyer can advise you on estate planning, family law, labor law (covering both your employees and your employer), tax law and more. It knows everything about your finances, your employees and your situation.
- Accountant and finance manager. Yes, it can read your bank statements and investment portfolios, your will, and countless other documents. It can prepare your taxes, advise you on investments, help you plan retirement, and walk you through college savings, trust fund management and more.
- Social manager. Wealthy people may hire someone, or borrow someone from the office, to help manage complex calendars for the beach home, the ski home and the yacht, plus event planning. No more. Your robot will do all this, including responding to inquiries from in-laws about the flat in Paris.
- Handyman. Your robot can repair that horrible scratch in the parquet floor that you and your son made when dragging the armoire across it. It can turn off the right circuits to take down the dining room chandelier and install the new one you just bought. It can install your Tesla Powerwall. The list goes on.
- Gardener. Your robot knows all there is to know about flowers, trees, vegetable gardens, your lawn, your irrigation system and gopher control. It can mow the lawn, and then it can plant hydrangeas along the lower tier of the terraced garden, which it built for you last week.
- Caregiver. Yes, including the heavy and delicate tasks of lifting an aging, sick person from their bed to their wheelchair, carrying them up the stairs and helping them with toileting. No more extra room needed for the live-in caregiver. Robot Annabel just took over that job too.
- Nurse practitioner. Almost a real doctor. Yes indeed. The big obstacle still keeping AI from helping you medically is its lack of physical presence and its inability to see, feel, hear and touch the patient in order to assess what’s going on. Any AI can ask the questions. Now imagine your robot becoming Dr. Jones: taking your pulse, inspecting your pupils and looking into your mouth, then feeling your abdomen and sensing when you experience pain, then informing you that you do not have appendicitis but should rest and wait for that stomach muscle strain to heal on its own.
All of this is impossible if privacy guardrails prevent the robot from learning about you by gathering your data and training itself on it, which means that the large language model (LLM) in the background is also learning about you. This presents a gigantic problem for personal privacy and consumer rights. We have not yet run face-first into this problem, partly because politicians are too busy with catfights over smaller issues and don’t properly understand the tech and its implications, and partly because the tech isn’t quite ready yet. Meanwhile, we have been hand-wringing about original content and the news media’s precious journalistic integrity being harvested by the machine and presented as original.
We have not yet built a moat between your personal data and the big data that powers the LLM. In the absence of a reliable shield for your personal data, the robots will either harvest everything and share it with the world, or they will be unable to deliver valuable services in our lives. Both scenarios are bad.
How could this moat be built?
- One approach lives at the edge: all your personal data is withheld from the cloud and stored on your phone or on a post-phone personal AR device such as your glasses or watch. This sounds pretty sexy, except that the encryption and anti-theft protection would need to be very strong while still allowing your AI full access to decrypt and use the data to serve you (see the sketch after this list).
- Another approach, which Meta could adopt and thereby solve several of its popularity issues, is to host a personal data and metadata store in your social media account, allowing you to edit or delete items, control access and see who’s using your data. This may be too complex for some people, and be left too open as a result.
- Your LastPass account could go on steroids and begin to manage your personal life’s metadata: your health stats, your stress levels, your financial plan, your legal documents, and the smart home devices your robot accesses.
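To make the first two ideas a bit more concrete, here is a minimal sketch of an on-device vault with an owner-readable audit log, written in Python using the widely available cryptography library. Everything here is illustrative: the PersonalDataVault class, the file layout and the accessor labels are my own assumptions, not anyone’s shipping product.

```python
import json
import time
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography


class PersonalDataVault:
    """Hypothetical encrypted, on-device store for personal-life metadata."""

    def __init__(self, key: bytes,
                 path: Path = Path("vault.bin"),
                 audit_path: Path = Path("vault_audit.log")):
        # Symmetric key; on a real device this would live in a secure enclave.
        self._fernet = Fernet(key)
        self._path = path
        self._audit_path = audit_path

    def write(self, record: dict) -> None:
        # Encrypt the full record and persist it locally -- never to the cloud.
        token = self._fernet.encrypt(json.dumps(record).encode())
        self._path.write_bytes(token)

    def read(self, accessor: str, purpose: str) -> dict:
        # Decrypt locally for a named accessor, and log who read the data and why,
        # so the owner can later audit every access.
        record = json.loads(self._fernet.decrypt(self._path.read_bytes()))
        entry = {"ts": time.time(), "accessor": accessor, "purpose": purpose}
        with self._audit_path.open("a") as log:
            log.write(json.dumps(entry) + "\n")
        return record


if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice: device keystore, not a variable
    vault = PersonalDataVault(key)
    vault.write({"health": {"resting_hr": 58},
                 "calendar": ["ski home: guests arrive Friday"]})

    # The on-device assistant decrypts locally; the owner can audit this read.
    data = vault.read(accessor="home-robot-assistant", purpose="schedule planning")
    print(data["calendar"])
```

The point of the design is simple: the data sits encrypted at rest on your own device, the assistant decrypts it locally to answer a query, and every read leaves a trail you can inspect, which is exactly the kind of moat the list above is asking for.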
Under no circumstances should your data become subsumed by the machine and pulled into the LLM itself. If that happens, our worst fears would begin to come true.
Under no circumstances, however, should we lobby lawmakers to block Big Tech’s AI access to our personal life metadata. This would cause a multi-year delay in the advent of helpful assistant robots, and would give China time to catch up and then offer a cheap service that would likely have even less respect for our data privacy.
Are you looking forward to owning a robot that can be your house cleaner, chef, lawyer, accountant, personal assistant and doctor?