
Privacy and the Public Interest

One of my readers recently pointed out that universities are leading the charge when it comes to understanding the social impact of the technologies we build. I know this same reader was in the audience when, as part of a “Future Workspace” panel, I began to talk about how important privacy, security, and analytics will be. No doubt, she had a smile on her face when I mentioned that technologists, particularly those working to improve our workplace interactions through AV, should start thinking seriously about privacy. Her response might have been, “You’re about three years behind the academic community on this one.”

Technology has largely evolved in a capitalist petri dish, mostly undisturbed. The result is companies like Facebook and Google that find themselves forced to change direction, mired in privacy scandals that could have been avoided. If you work in the AV space, you’re introducing technology every day into individuals’ workspaces, classrooms, offices, and other areas where humans do human things. We are a homo-bureaucratis species that both loves technology and hates privacy violations, with good reason.

Companies that build technology to enable fundamentally human-centered activities need to take special care. Whether you’re deploying sensors in diagnostic centers or enabling better face-to-face meetings, technologies that sit within our most personal spaces need to be the most sensitive to larger societal needs. Mersive (my company), for example, creates a product that allows users to have more effective (and potentially more fun) meetings, but we’ve also introduced a product, Kepler, to help users better understand their meeting culture.

When collecting data for our users, it’s incredibly important to understand the privacy and security issues. Fundamentally, I view these matters as a social contract between the data provider (the user) and the data collector (us). The contract is based on a clear understanding that the data will directly improve the user’s life in some way. Notice I said “user” and not the user’s company, or worse, some other company. This is why providing location data to Uber, for example, is acceptable to end users — because the contract is clear. I give Uber my location via their app, a ride arrives quickly, and I get to avoid talking to a taxi dispatcher. Things get problematic when that same location data is used for something else, like serving me ads based on where I’ve been, or worse, when that data is used to provide value to Uber by simply selling it to partners. When the value to the consumer is lost or obscured, I’m no longer happy to give out personal data.

One of our areas of focus as we build Kepler is to make sure the value users receive for their data is both clear and direct. The Kepler team calls this a “one-hop” policy. That is, we won’t introduce multiple hops into the value chain to claim indirect value by passing that information along to other providers. “Trust me, that ad you saw made your shopping experience better!” Data->Kepler->Some Ad Company->You is a value chain that is both diluted and one hop too many. Of course, we don’t collect personal data; instead, we focus on statistical data about how meetings unfold, where the best meetings are taking place, and why. This information can then be provided directly back to users as suggestions for the best meeting spaces, or even to help them avoid bad meetings before they happen via predictive analytics (“Looking to share from your iPad Retina in that meeting? Consider using a different meeting room with a higher-resolution display.”).
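To make the idea concrete, here is a minimal sketch of what a “one-hop” check over proposed data uses might look like. This is purely illustrative; the names and structure are my own invention for this column, not Kepler’s actual implementation.

```python
# A hypothetical sketch of a "one-hop" data-policy check.
# All names here are invented for illustration.

from dataclasses import dataclass

USER = "user"  # under the one-hop policy, value must start and end with the user


@dataclass
class DataFlow:
    """One proposed use of collected data, expressed as a chain of parties."""
    purpose: str
    chain: list[str]  # e.g., ["user", "kepler", "user"]


def is_one_hop(flow: DataFlow) -> bool:
    """Pass only flows of the form user -> collector -> user,
    with no intermediate third parties diluting the value chain."""
    return (
        len(flow.chain) == 3
        and flow.chain[0] == USER
        and flow.chain[-1] == USER
    )


flows = [
    DataFlow("suggest better meeting rooms", ["user", "kepler", "user"]),
    DataFlow("targeted ads", ["user", "kepler", "ad_company", "user"]),
]

for flow in flows:
    verdict = "allowed" if is_one_hop(flow) else "rejected: one hop too many"
    print(f"{flow.purpose}: {verdict}")
```

The first flow passes because the value returns directly to the person who provided the data; the second is exactly the diluted Data->Kepler->Some Ad Company->You chain described above, and it fails.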

Of course, these types of experiences happen where technology and policy overlap. This is why I was excited to learn about the academic focus on the public interest and technology. Universities are banding together to take a close look at how privacy, public policy, and the technologies we are building will play a role in our personal and social lives, and I think we can benefit. It has become a legitimate field, combining a deep understanding of technology with public interest matters in a way that overcomes traditional disciplinary barriers. Often referred to as public interest technology, this field seeks to create experts who can look at some of society’s largest problems through a public interest technology lens. In the past, students would emerge from either the humanities (e.g., focused on city planning or energy policy) or technology (e.g., computer science or materials). This new field seeks to put computational skills in the hands of humanists and a deeper understanding of the humanities in the hands of technologists. Students increasingly crave this intersection, and universities have begun to focus specific programs on public interest technology.

Several top universities have banded together to formalize this approach. Twenty-one schools, including MIT, Stanford, Arizona State, and UC Berkeley, have formed a group they call the Public Interest Technology University Network. It’s worth taking a close look at their programs. I’d love to see a similar trend take hold in more companies.

Ideally, as AI, analytics, and pervasive IoT sensors become an ever-increasing part of our lives, I’d like the petri dish those technologies grow in to be a solid combination of humanism and technology, with a bit of capitalism thrown in as an accelerant. You can help. Send an employee to one of these programs, have your next lunch-and-learn host an open discussion on the social impact of your company, and encourage your product managers to pursue technology with balance. In the end, everyone will benefit, and your products will withstand value judgments over much longer periods.

This column is reprinted with permission and originally appeared here.
