
How Does Technology Monitoring Affect Productivity and Privacy?

Written by Krutika Lohakare

We’re able to detect that in real time, all the way to the early stages of drowsiness, so we, or the car, can intervene earlier on. We’re spending a lot of our mindshare exploring the use cases around driver monitoring.
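One common drowsiness heuristic is PERCLOS, the percentage of "eye closed" frames over a sliding window. The sketch below is purely illustrative: the eye-openness scores, thresholds, and class name are assumptions, not Affectiva's actual method.

```python
# Minimal PERCLOS-style sketch: flag possible drowsiness when too many
# recent video frames show the eyes mostly closed. All numbers here are
# illustrative assumptions.
from collections import deque

class DrowsinessMonitor:
    def __init__(self, window_size=30, closed_threshold=0.2, alert_ratio=0.4):
        self.window = deque(maxlen=window_size)   # recent per-frame eye-openness scores
        self.closed_threshold = closed_threshold  # openness below this counts as "closed"
        self.alert_ratio = alert_ratio            # fraction of closed frames that triggers an alert

    def update(self, eye_openness):
        """Feed one frame's eye-openness score (0.0 = shut, 1.0 = wide open)."""
        self.window.append(eye_openness)
        closed = sum(1 for v in self.window if v < self.closed_threshold)
        perclos = closed / len(self.window)
        return perclos >= self.alert_ratio  # True -> driver may be drowsy

monitor = DrowsinessMonitor(window_size=10)
alert = False
for frame in [0.9, 0.8, 0.1, 0.1, 0.15, 0.1, 0.9, 0.1, 0.05, 0.1]:
    alert = monitor.update(frame)
print(alert)  # 7 of the last 10 frames are "closed", so the alert fires
```

Averaging over a window rather than reacting to single frames is what lets the system catch the gradual onset of drowsiness early instead of only responding to a full eye closure.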

But beyond that, in the world of semi-autonomous and fully autonomous vehicles, there’s a lot of innovation and imagination, re-imagination of what a mobility experience can look like.

You could think about personal content, about personalizing the environment in the car depending on who’s in it and what they’re doing. There’s a lot of innovation going into that, and it’s very exciting. I tend to think of a car as just a robot on wheels.

So you could expand or replicate everything we’re doing for the car in your office, living room, or kitchen, where you have social robots or a conversational interface like an Alexa or a Google Home. That interface could capture all of this same information and build it into the conversation, so that it stops being just a transactional interface, which is where we’re kind of stuck with a lot of these interfaces, and moves to the next level, where it’s truly conversational.

It can really help persuade you to change your behavior. One reference here: you may have all seen the movie “Her,” where this conversational interface, the operating system Samantha, gets to know the owner really well, obviously digitally.

But because she knows him really well, she’s able to persuade and motivate him to change his behavior. He’s depressed, and she manages to get him out of the house and re-engaged with the world. So I really do think there is huge potential for these interfaces.

Whether it’s Siri on your phone, an embodied agent, or something in your vehicle, if these interfaces have emotional intelligence, they will be able to persuade and motivate us to change our behavior, hopefully to be more productive, happier, healthier, or more connected.

You can pick the parameter that you really care about. But this also applies to how we connect and communicate virtually with each other. I talked about the example of live streaming and virtual events, where I find it really painful to be presenting and not seeing my audience. I just wish there were a way to capture people’s responses.

I don’t have to see everybody’s face. I recognize, again, that there are privacy considerations with that, but can we aggregate that information and visualize it as a real-time curve that gives presenters a sense of how engaged the audience is? I think it could be really powerful for the audience, too.
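The aggregation idea can be sketched very simply: average per-viewer engagement scores at each time step into one anonymized number, so the presenter sees a curve but never any individual face. The scores, viewer IDs, and function name below are hypothetical, not a real SDK.

```python
# Hypothetical sketch: collapse per-viewer engagement scores into a single
# anonymized time series that a presenter could watch live.
from statistics import mean

def engagement_curve(frames):
    """frames: list of dicts mapping viewer_id -> engagement score in [0, 1].
    Returns one aggregate value per time step; no individual is exposed."""
    return [round(mean(scores.values()), 2) for scores in frames if scores]

frames = [
    {"a": 0.9, "b": 0.7, "c": 0.8},  # t=0: audience attentive
    {"a": 0.6, "b": 0.4, "c": 0.5},  # t=1: attention dips
    {"a": 0.8, "b": 0.9, "c": 0.7},  # t=2: attention recovers
]
print(engagement_curve(frames))  # -> [0.8, 0.5, 0.8]
```

Because only the aggregate ever leaves the viewers' devices, a design like this addresses some (though certainly not all) of the privacy concerns mentioned above.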

I feel like we’re all craving to have a sense of a shared experience. And it’s so hard to do that when we’re all in our own spaces kind of distributed virtually. This has applications in virtual learning environments as well. You could imagine how this data could be super powerful for educators and learners to determine the level of engagement of the learners and readiness to learn.

Again, I imagine a lot of this hybrid world is going to stay post-pandemic. Telehealth is another area that has been accelerated by the pandemic, and it has a lot of power and potential. But can we use the fact that you’re interfacing with your doctor over video to capture all this data and quantify, for the first time ever, empathy?

What does empathy really look like? Can we run A/B tests? For example, if the doctor is heads-down taking notes and never actually looks up into the camera at the patient, we know that this correlates with a perception of less empathy, and we know that doctors who are perceived as less empathetic are more likely to get sued.
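The A/B test described here could take the shape below: compare perceived-empathy ratings from patients whose doctor made eye contact (A) against those whose doctor stayed heads-down (B). The ratings are made-up numbers, purely to show the shape of the analysis, not real study data.

```python
# Illustrative A/B comparison of patient-reported empathy (1-5 scale).
# Group A: doctor looks at the camera; group B: doctor stays heads-down.
# All ratings are invented for illustration.
from statistics import mean, stdev

group_a = [4.5, 4.2, 4.8, 4.4, 4.6]  # eye contact
group_b = [3.1, 3.5, 2.9, 3.3, 3.2]  # no eye contact

diff = mean(group_a) - mean(group_b)
print(f"mean empathy A={mean(group_a):.2f}, B={mean(group_b):.2f}, "
      f"lift={diff:.2f} (spread A={stdev(group_a):.2f}, B={stdev(group_b):.2f})")
```

A real study would of course need a proper significance test and far larger samples; the point is only that once empathy-related behavior is quantified, it becomes testable like any other metric.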

So we now have an opportunity to quantify these soft skills that are really key in our personal and professional lives, which we haven’t been able to do before. And finally, I’ll just end on the note that I truly believe this is going to become ubiquitous.

It’s going to be the de facto human-machine interface, and it will transform many industries. But I also fundamentally believe that, as an inventor of this field, of this category of artificial intelligence, I have a responsibility to ensure that it’s done right.

And there are a lot of ethical and moral implications of this technology in terms of how we develop it in a way that’s not biased, but also in terms of how and where we deploy it.

So, for example, we as a company do not do any work in surveillance, lie detection, or security, for a number of reasons, but primarily because we do not believe it respects people’s privacy. We have turned down millions of dollars of funding and potential revenue in those spaces because they do not align with our core values. So I’ll just wrap up: I believe there is huge potential in this technology for re-imagining what a human-machine interface looks like, not only in the future but actually today.

But perhaps more importantly, it can re-imagine what human-to-human connection and communication look like, upgrading the ways we currently interact digitally so that, hopefully, they bring us together rather than polarizing us.

And I really do think there’s a way to do that, but we have to think about the ethics of all of this and the unintended consequences. We’re part of an organization called the Partnership on AI, which was started by the tech giants Google, Microsoft, Amazon, Facebook, and IBM, and which has since partnered with the ACLU and other civil liberties organizations as well as startups like Affectiva.

And we’ve just wrapped up a project where we went through all of the applications of emotion AI we could think of and tried to articulate the unintended consequences: what can go wrong here, and how can we guard against it?

So I’m a huge advocate of thoughtful regulation, but I think that as inventors, innovators, thought leaders, and business professionals, we shouldn’t just wait for legislation and regulation.

We should be at the forefront of designing this in the right way and be stewards.
