What does Empath do, and what is it?
Empath is a technology that uses artificial intelligence to bring people's mental states to the surface and make them more obvious to others. It does this by looking at a person's facial expressions, body language, and the context of a video to guess what that person is feeling.
Can you explain how AI learns through exposure, experience, trial and error, and the science behind it?
The science behind how the AI learns is based on the theory of constructed emotion. This theory says that emotions are not fixed and universal, but are instead constructed by each person based on their experiences and background. The brain is a prediction engine that builds emotions based on what it senses. This understanding is used to build the artificial intelligence.
How does Empath validate the decisions it makes about the emotional states of participants?
Empath sets strict ethical boundaries for AI by making sure that there is a gap between its decision and the action taken, and that gap should be occupied by a human being. Empath provides a best guess of the person's state of mind but encourages the use of one's own judgment in taking the best course of action. Empath was built using a large dataset of quality data gathered from skills training sessions that used educational neuroscience to understand people's state of mind.
Why is it unethical to use facial recognition technology in certain situations?
People don't think it's right to use facial recognition technology in situations like surveillance, where people's feelings are read and decisions are made without their permission or consent. Also, the technology isn't used for interrogations or to find out if someone is telling the truth because of the bad effects it could have on people.
Marcus Cauchi: [00:00:00] Hello, and welcome back to the Inquisitor podcast with me, Marcus Cauchi. Today, it's a genuine delight and ever so slightly freaky to have as my guest Cauri Jaye. He's the founder, chief technical officer and chief science officer of a company called Sesh, and they have a fascinating product called Empath. Cauri, would you mind giving us one minute on your journey to get to where you are today?
One minute on Cauri's journey
Cauri Jaye: Of course, thank you, Marcus. It's good to be here. So I have been working with companies for 25 years with technology, building products, being a CTO, and basically moving them through using technology to solve problems. I mean, that sounds very generic, but it's a very specific thing that I've gone into, which is: how do you take the real hard problems and apply every single technology that we have at our hands, that we have at our means, to solving that problem? [00:01:00] And I've done that with a number of different companies, from National Geographic to tiny little startups. And that's me in a nutshell.
What does Empath do?
Marcus Cauchi: Excellent. Thank you. So tell us a little bit about Empath, because I think a lot of people will obviously be a bit nervous about it. I personally love the concept, because I fundamentally believe that a partnership between AI and human beings makes us more effective. So tell us a little bit about what Empath does.
Cauri Jaye: As you said, I agree completely: augmented intelligence is really what we need, where we have artificial intelligence and human intelligence working together to actually solve problems. So, Empath in a nutshell is a technology which brings to the surface people's states of mind to make them more evident to everybody else.
So when human beings talk to each other, we guess, right? We look at each other's faces, we listen to each other's voices, [00:02:00] we check body language. We make assumptions about who the other person is. We think about their cultural background and make assumptions about that. And we use all of these things to guess what that person might be feeling at any moment, because we don't actually know.
So Empath is an artificial intelligence which does exactly the same thing. It takes into account the people that it's looking at in a video situation: the situation that they're in, what they're seeing, how they look, how their body moves, how they express themselves, the larger context of everything, and then makes its absolute best guess as to what that person is feeling.
Now, the difference between a human being and Empath is that for all my years of living, I've only lived in a limited number of places, I've only met a limited number of people, and I've only seen a limited number of facial expressions. I've been exposed to a limited number of cultures, [00:03:00] whereas Empath, because it's an AI, has the possibility of being exposed to a much larger set than I have, and therefore can make a much better guess, a lot of the time, than I can as to what somebody's state of mind might be.
The science behind Empath
Marcus Cauchi: AI learns through exposure, experience, trial and error, and so forth. So tell me a little bit about the science behind this, first of all.
Cauri Jaye: So the science is inspired very much by Lisa Feldman Barrett, who has been doing affective science for a number of decades and has really pioneered, in the last few decades, the idea of constructed emotion. So what that means is that in the old days, we used to have this idea that emotions were universal. So you had a part of your brain that would light up when you were happy, and your face would show certain things. And it didn't matter where you were from, who you were, what age you were, your [00:04:00] background or anything.
Everybody would have the same general expression and the same part of their brain that lights up. Now we've moved on, and the neuroscience shows that that is absolutely not the case as to what is in your brain. And we've also seen that the studies that were done in the past that came to this conclusion were slightly flawed, in that they put a standard in place, which meant that everybody was using that standard and therefore it biased the actual data.
So what that means is that we've understood now that emotions are not this fixed thing that we have, but rather this thing that we construct individually based on what we're thinking. So if you think about it like this: your brain is sort of stuck inside your skull, and it has no idea of the outside world except for what it gets through the senses.
Yeah. And the senses are not just the five common senses we have. We actually have a lot of other senses: sense of orientation, sense of time, sense of direction, sense of temperature. There are lots of different senses that [00:05:00] we have in our body. Some of them are complex and layered; some of them are simple, like touch.

And what happens is these senses pass signals to our brain. And in the past we always thought that the way emotions worked was: there's a stimulus, so something happens out there in the world, and it goes through your senses and your brain digests it, and then your brain responds.
That's how we understand emotions. However, that's completely wrong. That's not how emotions work.
Marcus Cauchi: Can you tell me more?
Cauri Jaye: Sure. So what actually happens is the opposite. What happens is your brain and your history and your background, and all the things I mentioned about how you've been brought up, and your religion and your culture and your parentage, and all of these things, they allow your brain to make predictions about what is going to happen next in the world. And then what happens is [00:06:00] your senses take in the feed from the world, and your brain is either validated or invalidated in its prediction. So your brain is actually a prediction engine. It constructs what it thinks it's going to feel, and then either feels it or doesn't, based on the stimulus that it gets.

And that subtle switch makes all the difference in how we deal with emotions. And so Empath really embraces this way of understanding the world and understanding the brain, and uses that to actually drive how we build and construct the artificial intelligence.
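Cauri's "prediction engine" description can be caricatured in a few lines. This is a conceptual toy to make the predict-then-confirm loop concrete, not a neuroscience model; the states and messages are invented for illustration.

```python
def prediction_engine(prior_expectation, sensory_input):
    """Toy version of the constructed-emotion loop: the brain predicts
    first, then the senses either confirm or correct the prediction."""
    if sensory_input == prior_expectation:
        return prior_expectation, "prediction validated"
    # Mismatch: the incoming signal wins, and the internal model updates.
    return sensory_input, "prediction invalidated, model updated"

# The brain expects a threat; the world delivers a friendly greeting.
state, outcome = prediction_engine("threat", "friendly greeting")
```

The key inversion from the stimulus-response view is that the prediction exists before the sensory input arrives; the input only grades it.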
Marcus Cauchi: So effectively, what you're saying is that there are all these built-in, intrinsic filters and biases. They drive our feelings and our emotions, and then the brain tests its prediction against what happens.
Cauri Jaye: Mm-hmm.
How often do we find ourselves making wrong judgments about the emotions that other people are feeling?
Marcus Cauchi: So how often, [00:07:00] in that case, do we find ourselves making wrong judgments about the emotions that other people are feeling? And that then sends us down a horrific spiral of disaster.
Cauri Jaye: I don't know about the spiral of disaster, but what I can say is that the percentage of people who make correct guesses about emotions is quite low.

We make errors a lot of the time. Now, of course, it's a lot easier if we are dealing with people who are similar to ourselves. So if we've been brought up in a similar way, in a similar environment, or even in the same household with the same parents and so on, it's much easier to guess those emotions, because a lot of the assumptions we make behind what that person might be feeling at that point are probably quite correct. However, the more divergent we are, the more that we are encountering people who think differently, from other parts of the world and so on, the more difficult it is to gauge those emotions, and so the more we make mistakes and errors. And this is, [00:08:00] I mean, observationally true; we all know this if we talk about it. But now, in the modern world, where we are all connected by video across the planet and we are constantly dealing with people who come from a completely different context from what we know, we make these errors all the time, and it's actually degrading communications.
Marcus Cauchi: Really interesting, because I suspect the potential for losing business, for falling out, has been massively increased because of the exposure that we are getting. Whereas if we are spending time with people, we become familiar with their mannerisms, their belief systems, their communication styles. So tell me this, then.
How do you validate the decisions that Empath is making about the emotional states that the participants are expressing?
Marcus Cauchi: How do you validate the decisions that Empath is making about the emotional states that the participants are expressing?
Cauri Jaye: We have to set up strict ethical boundaries for AI, which means that when an AI makes a decision, [00:09:00] there needs to be a gap between its decision and the action that is taken because of that decision.

And in that gap needs to rest a human being, to be very clear. So what that means is that there are very few circumstances when an AI should make a decision and then take the action directly based on that decision. Now, people do that in trading; they're beginning to do that with cars, and that's why there's such deep training that has to go into those AIs. But generally, in business and so on, in that gap we still need a human being, and we very much believe in that. So what Empath is doing is giving you a best guess as to what the state of mind of that person is, and saying: look, use your own judgment, but mix that with what we think is a fairly confident choice, to help you identify your best course of action. And that process: it's not about a massive amount of data that [00:10:00] helps us build an AI that can do that, but rather about very, very good quality data that allows us to do that.
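The decision/action gap Cauri describes can be illustrated with a minimal human-in-the-loop sketch. The function names, states, and confidence values below are invented for illustration and are not Empath's actual API.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """The AI's output is a best guess, never an action."""
    state_of_mind: str
    confidence: float

def suggest(filler_ratio: float) -> Suggestion:
    # Toy stand-in for a real model: guess a state of mind from one signal.
    if filler_ratio > 0.15:
        return Suggestion("nervous", 0.7)
    return Suggestion("composed", 0.6)

def act(suggestion: Suggestion, human_choice: str) -> str:
    # The gap between decision and action is occupied by a human:
    # the AI's guess informs the person, but only the human's choice is acted on.
    return human_choice

guess = suggest(0.2)                                        # the AI's decision
action = act(guess, "slow down and ask an open question")   # the human's action
```

The point of the pattern is structural: `suggest` can never trigger `act` on its own, so the system stays advisory.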
So when we started Sesh, Sesh as a company, before we built our AI: we are a lifelong learning company. We're very much dedicated to communications intelligence and to increasing people's ability to understand each other and communicate, as we believe that fundamentally one of the biggest problems we have in the world today is that people don't understand each other. And so we set up to solve specifically that problem, in sales and in other areas as well.

And so when we set up the company, we wanted to build a really good data set. So what we did is we designed a series of skills training sessions. They would teach you things like rapid decision making, or empathy, or how to better manage a team, or leadership skills and so on. And these sessions were an [00:11:00] hour long, 57 minutes long. You would be in a room with six to 12 people; they were live. Remember those days, when people could get together in a room? There was a 360 camera in the middle of the room. And the way the sessions were designed is we used educational neuroscience to set them up so that you could take people from that area of doubt to an area of real trust and understanding about what you were teaching, in 57 minutes.
So you take them on a journey which was prescribed, but they wouldn't realize it was prescribed; they wouldn't understand the journey they were on. And with a camera recording it, we could see exactly what was happening. We could see when they understood; we could see when they were frustrated.

They were encouraged, within the tactile and expressive nature of the sessions, to actually say what they were feeling at different times. And so we really knew what was going on. We did that for a year before the pandemic hit, [00:12:00] and then obviously getting six to 12 people in a room became the worst idea you could possibly have.
And so we shut that down, but at that point we had gathered enough data that we could have very, very clear indications of people's state of mind, and attach that to what we knew about the person, what we knew about the situation, their facial expressions, their audio, everything about them.

And from that, we could build a really strong data set, which we could then test with other data, by running videos through the AI, seeing what it came up with, and then having the people in those videos validate or invalidate whether we were making correct assumptions. And so by using data in that way, and in a number of very smart ways (that work is led by Kevin Woolery, one of the other co-founders of the company and our chief data scientist), we use that technique and other techniques to create really solid data that underpins the conclusions that our [00:13:00] AI comes to.
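The validate/invalidate loop can be sketched as a simple agreement score between the model's guesses and the participants' own reports. This is a generic illustration, not Sesh's actual evaluation code; the state labels are invented.

```python
def agreement_rate(predicted, self_reported):
    """Fraction of moments where the AI's guess matched what the
    participant said they were actually feeling."""
    if len(predicted) != len(self_reported):
        raise ValueError("each prediction needs a matching self-report")
    matches = sum(p == s for p, s in zip(predicted, self_reported))
    return matches / len(predicted)

# One participant's session: model guesses vs. their own labels.
predicted     = ["engaged", "confused", "engaged", "frustrated"]
self_reported = ["engaged", "confused", "bored", "frustrated"]
score = agreement_rate(predicted, self_reported)  # 3 of 4 match -> 0.75
```

Tracking this score over many sessions is what tells you whether the model's assumptions are actually holding up against people's own accounts.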
What are the contexts in which using such technology is unethical?
Marcus Cauchi: Very interesting. Okay. So we've got a strong dataset; we're observing real-life human behavior. Talking about ethics: what are the contexts in which using such technology is unethical? Let's start with that.
Cauri Jaye: So, what we've done is we actually created a framework for ethics, and whenever we come across a use case, we put it up against that framework to measure whether we think it is an ethical or unethical usage of our technology.

And it's something we've done pretty much right from the beginning, to see where it goes. So, first of all, one of the fears people have comes from the fact that we've confused, especially in the law and especially in the US, recognition with identity. And we kind of think of those things as the same thing.
So we think of facial recognition and facial identity as the exact same thing, whereas they're completely [00:14:00] different. So think about it this way: if I'm walking along and I look at your face, I've recognized that you have a face and that you are a human being. However, I have no idea of your identity and no interest in searching for your identity or trying to find it, right?

If I needed to, I would ask you who you are, find your name and so on, and interact with you, and you would volunteer that information. So in the same way with Empath: Empath needs to look at faces to recognize that a face is a face, and to be able to say, okay, I'm going to look at this face and see what's going on, but it has no interest in your identity.
You don't need to tell us who you are; we're not interested in looking at that. And so one of the things that we do is we don't actually need to store your facial engram. We don't need to keep your data in any way to be able to do what we do. We just need, in the moment, to recognize that from this frame in the video to the next frame in the video you are the same person, but we have no idea who you are.

[00:15:00] And so that's one of the critical differences between those two that has created an ethical challenge. So what we've done is we've realized, by looking at these different use cases and different ethical situations, where they could be abused. And so one of those things is surveillance.
That's where you could use this technology to recognize how people feel, and then to make decisions based on that which are not in the best interest of the person being recognized, and without their permission. So, for example, that would be like CCTV cameras, right? That would not be an ethical usage of this technology, because the person hasn't given any consent and they have no idea why you're using it.

So those are the kinds of things that we avoid. Other things that we avoid at the moment, because of the level of impact they could have on an individual, are things to do with interrogation, things to do with trying to detect whether somebody is lying in a [00:16:00] particular circumstance or not. Those kinds of things, we feel, are not the right place for this at the moment. Potentially, when the technology's further along, there may be circumstances where that is useful, but we would have to look at that when the time comes.
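The "recognition without identity" distinction Cauri draws can be sketched as an ephemeral, in-memory tracker: faces are matched frame to frame within one session and then discarded, with no identity lookup and nothing persisted. The embedding format and similarity threshold here are illustrative, not Empath's internals.

```python
def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

class SessionTracker:
    """Assigns anonymous track IDs within a single video session.
    Embeddings live only in memory and are discarded with the object;
    nothing is written to storage and no identity lookup ever happens."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self._tracks = []  # in-memory only, never persisted

    def track(self, embedding):
        for track_id, ref in enumerate(self._tracks):
            if cosine(ref, embedding) >= self.threshold:
                return track_id           # same anonymous person as an earlier frame
        self._tracks.append(embedding)
        return len(self._tracks) - 1      # a new anonymous person

t = SessionTracker()
a = t.track([1.0, 0.0])      # first face seen
b = t.track([0.99, 0.05])    # same face in the next frame
c = t.track([0.0, 1.0])      # a different face
```

The design choice is that the track ID answers "is this the same person as a moment ago?" without ever answering "who is this person?".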
Marcus Cauchi: So if one were to use it, for example, in the context of a coaching session with consent, so that a manager was able to identify the emotions that were going on under the surface. You don't do it in real time at the moment, though, do you?

Cauri Jaye: No. About a year from now we'll be real time, but at the moment we find it's actually more interesting to do post-analytics, where you can see trends over time and compare between multiple sessions. So there are a few places where we've found it extremely useful to begin with.
So one of those, as you say, is coaching and training sessions. It's very interesting: Empath is more useful where both sides are using it. [00:17:00] So it's not one side using it on the other, but rather both sides seeing each other, understanding more deeply what each other is going through, and increasing the communication between the two.

So coaching sessions are one place it's really useful. Team settings are another: for example, if teams have regular meetings, uploading those team sessions is an extremely useful use case, because a lot of things that are unsaid come to the surface, and those things can then be dealt with in interesting ways and resolved more quickly, especially as most of us are now not in the same physical space. Another place where it's very useful is on sales calls. We have a disclaimer and a notification that goes to the person in their invite, so that they know they're being recorded and so on. And again, they're welcome to actually access it and look at the same recording as you are.
What's interesting about sales in particular for us is that gone are the days of Mad Men sales, right? You're not [00:18:00] trying to convince people to buy something they don't need just so you can make a buck; there are way too many products in the world now for that. What you're trying to do is find the right product for the right person, and get them to understand how it can be useful to them. And that really is all about communications. So running your sales calls through Empath really allows you to start to see what effect you're having on the potential buyer, adjust yourself, and figure out: where are you miscommunicating? Where are you not aligning? And really see if you can get there.
Marcus Cauchi: I think a theme that's really very strong in my life at the moment is buyer safety. And what's really interesting contextually about this is the potential to run a technology like Empath so that the buyer can see that the salesperson is playing with a straight bat, as much as the salesperson is seeing that the buyer [00:19:00] is playing with a straight bat. Because I think too often in a sales environment, you find that people come with expectations that, you know, there's caveat emptor: buyer beware. McKinsey recently released a study that said 30% of business-to-business buyers want a seller-free buying experience, because in another study something like 67% of buyers feel that sales and salespeople are morally bankrupt.

Now, that's not been my experience. I think a lot of that has been driven by terrible leadership, terrible management, and measuring the wrong things that drive unintended consequences and the wrong behavior. And if you can create the conditions where buyer safety is paramount and you build everything around the customer, then you start partnering with them.

I think another area that could be really interesting is in the interview process, [00:20:00] because a wrong hire, first of all, is hideously expensive: in an enterprise sales environment that can cost anywhere between 35 and 125 times their salary.
Cauri Jaye: Mm-hmm
Marcus Cauchi: Now that's a hell of a lot of money, but more importantly, these people are now going to be spending 8, 10, 12, 14 hours a day making their livelihood with the organization.
Candidate and the employer have protection, they have permission and they have parity
Marcus Cauchi: And I think what's really interesting here is: how do you create that level playing field, so that the candidate and the employer have protection, they have permission, and they have parity? So again, I think that's a really interesting scenario where it could be used. Your thoughts?
Cauri Jaye: The issue that's been found: there was a recent lawsuit where people were using an AI to make judgments on potential hires. One of the biggest problems is that it was without their permission, and I believe, as we said before, consent is a very big thing; you need to have consented to this happening. [00:21:00] I definitely think that it can help from a communications perspective, but the AI shouldn't be making decisions and telling you to hire this person or that person.

What it can certainly do is help you get beyond some of the things that might have happened just because of nervousness and the situation, or somebody who's just really good at selling themselves. But if there are certain underlying things that are missing, I don't know how much it's going to help with that.
But I wanted to talk about the sales piece a little bit. One of the things that happens in sales a lot of the time is that, because people have these targets they want to hit, they're trying to sell at any cost, and it's got this gamification aspect to it. You know, you've called the person, or you've emailed them, and you've got them on a call. And that process of getting them from the first cold touch to a phone call was already quite a lot of investment. And so at this point you're like: oh, I don't want to lose them, I've got to get this sale, [00:22:00] I have to make it happen. But what happens is that a lot of the time you can end up pushing and pushing and pushing, and that person goes through with the sale and then has buyer's remorse.

That, I think, is where that negative stereotype of salespeople comes in. Whereas if you can very quickly ascertain whether this is the right product for that person, whether they would truly be interested or not, you can drop the investment you've made much earlier in the cycle and move on to people who are worth spending more time with, and actually increase your sales that way, with buyers who are much happier and don't have that buyer's remorse.

In the modern world, we run a reputation economy, and so that becomes ten times, a thousand times more important, because when somebody actually buys your product and enjoys it, they can just as easily spread that news as they can spread the news about how terrible an experience they had, and the reputation of you, your company and your product can all spiral downwards very rapidly in the [00:23:00] modern age. And so this ability to spot what would not be a good sale very early on is really important. And that's the kind of thing where Empath really empowers both sides to find those connections quicker and save on those wasted sales hours.
Marcus Cauchi: I think there's another element of this as well, particularly when you are taking a new salesperson on. I can see an application for this where you can use it to run the choreography of the sale and see how well they communicate, at what point they're creating positive emotion versus neutral or negative emotion, and where they are getting in the way of the buyer and their decision to buy.

Now, that really fascinates me, because the potential for speeding up onboarding and improving the customer experience of engaging with your brand, and with the individual salesperson, could be enormous.
Cauri Jaye: Marcus, [00:24:00] I'm so happy you said that, because there's a whole other half to the Empath product which I haven't even talked about yet.

So, about that then. One of the wonderful things is, as I mentioned before, we were an educational-neuroscience-based training company. We set ourselves up to do that as part of our company, to be able to get the data set and build the artificial intelligence. So what we've done is we've taken that same format that was 57 minutes long and shortened it to a sort of five-to-seven-minute online interaction.
It's very fun, and it runs you through what we call micro-trainings, which are very highly focused. An interesting aspect of what Empath does is, when Empath looks at your video, let's say a sales call, it can identify how you are managing yourself. It can look at you and say: there is a certain amount of negative versus positive speech;

you are [00:25:00] using a lot of filler words; you are lacking confidence; you are talking more than you are listening; you are asking fewer questions; and so on. And because of this, it can then recommend to you very specific, very personalized micro-trainings to help you adjust that behavior so that you can do better on your calls.

The really nice thing is that it is completely objective, meaning that the next time you upload a call, it can check whether you've got better or not at that particular thing, and then train you more if necessary, or let it go. And so it's not just useful for onboarding; it's also useful for upskilling everybody on the team.
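The kind of objective signals Cauri lists can be sketched as straightforward transcript metrics. The filler-word list, thresholds, and micro-training names below are invented for illustration, not Empath's actual rules.

```python
FILLER_WORDS = {"um", "uh", "like", "basically"}  # illustrative list

def call_metrics(words, question_count, seller_seconds, buyer_seconds):
    """Compute a few objective signals from one call and map them to
    hypothetical micro-training recommendations."""
    filler_ratio = sum(w in FILLER_WORDS for w in words) / len(words)
    talk_ratio = seller_seconds / (seller_seconds + buyer_seconds)
    recommendations = []
    if filler_ratio > 0.10:
        recommendations.append("eliminating filler words")
    if talk_ratio > 0.60:
        recommendations.append("listening more than you talk")
    if question_count < 5:
        recommendations.append("asking open questions")
    return filler_ratio, talk_ratio, recommendations

words = ["so", "um", "we", "uh", "deliver", "value", "um", "fast", "like", "really"]
fr, tr, recs = call_metrics(words, question_count=2,
                            seller_seconds=540, buyer_seconds=260)
# fr = 0.4 (4 fillers in 10 words), tr = 0.675, recs lists all three trainings
```

Because the metrics are objective, re-running them on the next uploaded call shows directly whether the recommended behavior actually changed.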
Marcus Cauchi: Well, interesting, because I'm a huge fan of the conversational analytics tools like Refract and Gong and Chorus, because they are incredibly powerful as self-coaching tools. A lot of people in sales are reluctant because they consider it to be Big Brother, but it's anything [00:26:00] but, if it's applied well and the intent is correct.
Cauri Jaye: Mm-hmm
Marcus Cauchi: The objective is to make sure that you improve, that you get a better result, but more importantly, that the customer doesn't have to endure the crap that you're going to pile out. Because let's face it, we've all suffered it, and we've probably inflicted it; I've been guilty of it as well, inflicting drivel on the prospect. And their time is way too precious, and the attention span of buyers is incredibly short. A lot of my clients historically sell into the C-suite. You've got about two minutes to make a valuable point, or that conversation is pretty much over.
Cauri Jaye: mm-hmm
Marcus Cauchi: And the cost to get there: when you take into account the emails and the telephone attempts, it's often 33 dial attempts to get through to one person.

And if it's a senior executive, it's 46. And each of those might have two levels of navigation, taking two [00:27:00] and three-quarter minutes. So you could easily have spent three hours to get through to one person. You've got to make sure that that time is exceptionally well spent, you've got to be timely, you've got to be relevant, and you have to deliver value, or else you're not getting a second call.
Again, when you look at the ROI on this for businesses, the potential, assuming that this technology evolves to the point where it's actually reliable. And I'm going to test this out over the next week, so I'll come back to you, and it might be interesting to have a second call where I bring my own lessons of how I've butchered calls.

And I think what's really interesting here is the ability to deliver not only more value, but to accelerate the learning cycle and eliminate the waste, because actually most companies could easily double, triple, or quadruple their revenue [00:28:00] if only they eliminated the waste at the front end and in the middle of the pipeline, which is where billions or trillions of dollars a year are being lost.
What's positive and what's negative
Cauri Jaye: Absolutely. Yes. I completely agree. One of the issues. And now that we have a lot of analytics going on, where people are analyzing speech and analyzing in different ways to figure out what is positive, what is negative and all of these things, what we've found is that there there's, there's a point where they stop, where they say, oh, This person is very negative.
And so what do you do? Do you send them on a one week training course? Do you send them two hours a day to take online training and so on? It's just not viable, not, not in a highly competitive world. You can't, you're not taking four or five days off. You're not taking two hours a day off every single day because every single one of those hours is lost sales, lost time, actually in productivity.
And so that's why we found that if you could, at that point, not only, uh, say this is the issue, but say here's your solution. And it's [00:29:00] not a big solution that covers loads of things that are relevant to you, but here's a micro training. That's highly targeted the specific thing that you need to learn. And at the other end of that, five to seven minutes, you will have one to three techniques that you can employ immediately.
That's going to change that behavior, so that you can see the benefits of it in that instant. That is where we wanted to take this, and that's what we've done.
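The loop Cauri describes, detecting a state of mind and then serving a five-to-seven-minute micro training with one to three immediately usable techniques, can be sketched in a few lines. This is a purely hypothetical illustration: the catalog entries, state labels, and function names are invented for the example and are not Empath's actual API or content.

```python
# Hypothetical sketch of the flow described in the interview: a detected
# state of mind maps to a short, targeted micro training instead of a
# multi-day course. All names and entries here are invented illustrations.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MicroTraining:
    topic: str
    minutes: int  # five to seven minutes, per the interview
    techniques: List[str] = field(default_factory=list)  # one to three techniques

# Invented catalog keyed by a detected state of mind
CATALOG = {
    "negative_tone": MicroTraining(
        topic="Reframing negative language",
        minutes=6,
        techniques=["mirror then reframe", "ask an opening question"],
    ),
    "low_engagement": MicroTraining(
        topic="Re-engaging a quiet prospect",
        minutes=5,
        techniques=["name the silence"],
    ),
}

def recommend(detected_state: str) -> Optional[MicroTraining]:
    """Return the targeted micro training for a detected state, if one exists."""
    return CATALOG.get(detected_state)

rec = recommend("negative_tone")
if rec:
    print(f"{rec.topic}: {rec.minutes} min, {len(rec.techniques)} technique(s)")
```

The point of the sketch is the shape of the decision, a targeted lookup at the moment a state is detected, rather than any particular catalog.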
Marcus Cauchi: And so you're building a library of these micro trainings.
Cauri Jaye: Yes.
Marcus Cauchi: And are those available commercially?
Cauri Jaye: The only way is through Empath. So once you start uploading your videos, we start looking at them.
We start recognizing certain traits that you display in your sessions, and you get specific micro trainings that are for you, right now. We have a closed beta, so there's a beta list that people can sign up for on getempath.com, and within that beta the flow of micro trainings will increase over time as we go.
But yes, [00:30:00] that's exactly how it works and it is commercially available.
What are you struggling with at the moment?
Marcus Cauchi: Look, I know that time is short today, so I would love to have you back if you're open. Tell me this: what are you struggling with at the moment? What are you wrestling with?
Cauri Jaye: So the only thing that we're really wrestling with is that we've gone through an alpha program with a bunch of partners.
And we have been in our beta for a few months now, with people using the system who are really communicative about what they need and what they would like to see. So our only struggle at the moment is that we have a backlog of micro trainings that people would like to see, and a backlog of features that they would like us to pull out of the various videos.
And they would like an extended range of states of mind. We just can't build them fast enough; that's our biggest struggle at the moment, keeping up with the demand. But the interesting thing is that our [00:31:00] artificial intelligence is quite young at the moment. Like a young person, it hasn't had a huge amount of exposure to the world, so it makes a certain number of guesses, but its guesses about certain states of mind are still quite simple. Now, we can still pull a ton of value from that, but it's not where we would like it to be. So at the moment we're running a program where we're working with research institutions and others to build up the knowledge of our artificial intelligence, so it can deliver much more nuanced states of mind.
Most of the analytics you'll find out there are based on the old science, the Ekman science from the 1960s, where you've got seven or eight emotions and that's it: happy, sad, angry, confused, and so on. What we are really interested in, and what we're starting to bring to market, is the real nuance behind those emotions.
And you'll notice that I tend to talk about states of mind rather than emotions. Emotions are just one kind of state of mind. There are also, for [00:32:00] example, decision states, which are really important to sales: I agree, I buy in, I understand, I comprehend. These are moments where you change your mind.
And so we're also training our AI on those kinds of things, so we can help people make predictions about when those things are actually going to happen.
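As a rough illustration of the distinction Cauri draws here, emotions as only one family of "states of mind", alongside decision states like agreement and buy-in, the idea can be sketched as a toy taxonomy. The category names and lookup below are invented for this example and bear no relation to Empath's real models.

```python
# Toy illustration of "states of mind" as a broader set than basic emotions.
# The taxonomy below is invented for this example, not Empath's real one.
BASIC_EMOTIONS = {"happy", "sad", "angry", "fearful", "surprised", "disgusted", "confused"}
DECISION_STATES = {"agrees", "buys_in", "understands", "comprehends"}

def state_family(state: str) -> str:
    """Place a detected state into a broad family; a real system would
    model far more nuance than this lookup."""
    if state in BASIC_EMOTIONS:
        return "emotion"
    if state in DECISION_STATES:
        return "decision"
    return "other"

print(state_family("happy"))    # emotion
print(state_family("buys_in"))  # decision
```

The design point is simply that the output space is wider than the classic seven or eight Ekman categories, and that some states describe decisions rather than feelings.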
Marcus Cauchi: Some of the stuff that I've seen from Ekman: sure, there are the basic emotions, but then there are all the nuances, and very few people have been exposed to those.
The last thing I saw described about 150 to 160 different nuances of the emotions. And when you build decision states on top of that, it just goes to explain why human beings do not understand other human beings. You only have to look at how messed up your own personal relationships are. You live with someone, and half the time you don't understand them.
So how are you going to understand a stranger? This is really, really fascinating, and I can't wait to see how this evolves. I'm going to throw myself into the beta, [00:33:00] you know, with a (unintelligible).
You've got a golden ticket and you can go back and advise the idiot Cauri, age 23. What choice bit of advice would you whisper in his ear that he would probably have ignored?
Marcus Cauchi: Tell me this: you've got a golden ticket and you can go back and advise the idiot Cauri, age 23. What choice bit of advice would you whisper in his ear that, you know, he would probably have ignored?
Cauri Jaye: I think that's quite simple. I would've explained what first principles are: the idea that you can break any problem down into its very basic elements, and then, once you've done that, build a solution back up by solving those specific problems.
So first principles is a really simple idea, but for some odd reason it's a really hard thing for people to do, and I would've wanted to explain it to myself back then. Understanding it has fundamentally changed how I approach a lot of things in my life, but specifically, in this case, technology. Instead of learning by analogy, [00:34:00] which is what a lot of people do, where they say, oh, I need a solution to this problem, let's see what solution exists out there, and you may or may not get the right thing, I would think: okay, I want to find the solution. What's really the problem? And what's the problem behind that? And what's the problem behind that? Break it down into its real fundamentals, and then I can build it back up and actually find a solution that 95% of the time is better than anything else that's already out there. I think if I could have explained that to myself back then, it would've been an extremely powerful lesson.
Marcus Cauchi: First of all, fantastic piece of advice.
And that sounds like a good sales call because the problem people bring you is never the real problem.
Cauri Jaye: Mm-hmm.
Marcus Cauchi: And until you get to the root cause of the problem, then all you're ever gonna be doing is putting a sticking plaster on a cancer.
Cauri Jaye: Exactly.
Marcus Cauchi: Because you'll try and solve the symptom, not the cause.
Books and content around first principles
Marcus Cauchi: I normally ask what's influencing you in terms of reading [00:35:00] matter, video, and audio, but I'd love to delve into books and content around first principles, because I think that's something people should go off and dig into. Can you make any recommendations there?
Cauri Jaye: The person who has popularized first principles more than anybody else in the modern age is Elon Musk.
Right? He talks about it quite often, and you can read a lot of what he's said about it. But what's interesting is that I haven't read any great books about first principles, although I have recently designed a micro training within the platform to help people understand first principles. There are a lot of really good blog posts and articles on Medium and elsewhere about first principles.
And there are a lot of great examples that go all the way back to Archimedes, so you can really start to understand it. I have not found any one particular book that I would recommend, but I do advise a good Google search: read what Elon has had to say about it, and then look further afield [00:36:00] for people who have broken it down further.
Another thing you can do is sign up for our beta. Come join us, and you can take our micro training.
So how can people get hold of you?
Marcus Cauchi: Excellent. Well, I'll definitely want that micro training. Okay. So how can people get hold of you?
Cauri Jaye: The best way to get hold of me is at the website, getempath.com. That's G-E-T-E-M-P-A-T-H dot com. Down in the corner of getempath.com there's our little assistant, Emma. If you click on her and ask her anything, or ask for me, that will get to me right away.
Marcus Cauchi: And apparently Emma is single-ish.
Cauri Jaye: She is. She has a cat.
Marcus Cauchi: I thought it was a Tamagotchi.
Cauri Jaye: It is a Tamagotchi cat. Absolutely.
Marcus Cauchi: Excellent. Cauri Jaye, thank you. This has been absolutely fascinating.
Cauri Jaye: It's really good to talk to you, Marcus.
And I would love to come back and talk some more about this.
Marcus Cauchi: Brilliant. Let me go away and experiment, and I'll definitely [00:37:00] have you back, and maybe bring you onto a round table with a couple of other players in the AI space that I really rate, because I think that whole subject around ethics, human-AI partnerships, the potential to use AI for good, and also bias, is really key.
I don't know, have you come across a lady called Amy Brown? The work that she's doing I find really fascinating. Rob Furley at White Rabbit is very interesting, and also Martin Lucas at Gap in the Matrix, because he answers the same question you do: why do human beings not understand other human beings?
And one of the contexts in which I thought this could be really interesting is product user groups. Being able to test your marketing and test your messaging could be really powerful, because there's an awful lot of money being wasted on terrible digital advertising. Now, 265 billion a year is [00:38:00] squandered on adverts that get one click or less.
And that's 4.2 quadrillion interruptions to people's days. Being able to improve that, I think, could be massively useful commercially. And I'd be very curious to see what the likes of Google and Facebook are doing in terms of courting you as a partner.
Cauri Jaye: We actually did look at that as a use case, and we did some experiments; they were quite successful and we liked it. But we decided to go to market with what we do with teams and individuals, bettering themselves and bettering their teams, mainly because we saw it could have a broader impact, a much larger field of play, and a much larger influence this way than by being behind the scenes, helping people sell things better by using better language. We just thought this was a more impactful way to go to market.
Marcus Cauchi: Excellent. Cauri Jaye, thank you so much.
Cauri Jaye: Thank you. It's my pleasure.
Marcus Cauchi: So this is Marcus Cauchi signing [00:39:00] off once again from the Inquisitor podcast. If you found this conversation useful and interesting, then please do subscribe, comment, like, and share. And if you are the owner of a technology company, or a CEO, and your goal is to grow your business and achieve genuine, sustainable hypergrowth without the wheels coming off, and you want to build a highly engaged and highly productive team, and you want clients who stick with you year after year, then let's schedule a time for a brief conversation. You can reach me at firstname.lastname@example.org or direct message me on LinkedIn. And if you think you'd be a good guest, or you know someone else who would be, then please ping me a message and connect us. In the meantime, stay safe and happy selling. Bye-bye.