Season 1 – Episode 5:
Arthur Tisi
LaVerne:
Welcome to Brilliant in 20, a new podcast from the Scoop News Group and Emerald One, where we celebrate the unique brilliance of today's leaders and share their greatest lessons with you in just about 20 minutes. Hi, I'm LaVerne Council, CEO of Emerald One. My guest today is Arthur Tisi, founder and Chief Executive Officer of MeaningBot. MeaningBot uses artificial intelligence to help companies understand themselves and, therefore, create the best connection between them and potential employees. Prior to MeaningBot, Arthur launched two successful analytical startups and has served in executive roles at several large companies, including being the Chief Information Officer for the Metropolitan Museum of Art, which I'm excited to learn more about, and he has advised the Graham School of General Studies at the University of Chicago and Columbia University. Pretty swanky, Arthur.
Arthur:
Thank you.
LaVerne:
But if that wasn't enough, Arthur has also… And he doesn't like us talking about this, but he created an AI dating site, which was intended to help people connect on a less superficial level. Now, I'm not going to ask anything personal about that, but it sounds pretty interesting. He was also part of a private equity team that took a company public on the NASDAQ. It's easy to say that Arthur is just your normal guy doing AI. I think that would be an understatement. I'm excited to have Arthur with me today, and let's talk about AI. Let's talk about you, Arthur. Thanks for coming.
Arthur:
Thank you. Thank you, LaVerne.
LaVerne:
I was fascinated by your focus on using technology to help people improve themselves. That's sort of been a theme throughout your career, frankly. What made you want to do this?
Arthur:
Well, I grew up in the city. In New York, in the Bronx, the South Bronx, and I always had an experience that there were various forms of inequality. In 2014, somebody shared an amazing statistic with me, which was, "If your name is Jose versus Joseph, you only have a 20% chance of having your resume considered." So if you send in your resume and your name is Joseph, you are 80% more likely to be considered simply because of your name. I started thinking about that, and I shared it with one of our other co-founders, and we came to the realization that if we could strip out the bias that's inherent in names and socioeconomics and gender and race, and just distill down to who that person is, we could have something amazing. We could help people who were experiencing that bias figure out how to game the system to their advantage, because they are capable, and we could also help companies eliminate the adverse impact associated with hiring only one kind of group and not really being aware of the protected classes that exist.
LaVerne:
That's fascinating. Did you find that peeling away the outer layer really showed we were a lot more alike than different, or what did you notice in that process?
Arthur:
Yeah. There are two ways to look at it, right? The first piece is the traditional skills that you bring to a job, whether it's being a programmer or having certain core competencies, and those things can be somewhat impacted by the academic environment that you're in. Right? Obviously, if you're in a disadvantaged group, it's going to be harder for you to access that kind of core competency in terms of training, but setting that aside, things related to IQ, things related to emotional intelligence, things related to soft skills, have no color. They have no ethnic predisposition. They have no gender.
The bias is inherent in building the models that look at these people. The bias is not in the people themselves. Either you're strong in conscientiousness or openness to experience, or you're not. Now, there's a second piece that kind of informs the nature-nurture question: your tenacity. Sometimes your tenacity is part of your personality, obviously, but then it's also part of the environment. By and large, the only thing that can have an influence one way or another on an individual is their inability to access that kind of academics because of their socioeconomic status or some other category of bias.
LaVerne:
It's interesting, because in the area of bias, and we're going to go into this more later because you and I have had conversations about this before, we know that people, even if it's not intentional, can bring unintentional bias into algorithms and into processes and into workplaces, right, so-
Arthur:
Even into the interactions, LaVerne. You know how one person hears, "Hey, I played lacrosse," and it's like, "Oh, you're my person"? That's a bias, right? That may not be best for anybody, but to your point, the algorithms can inform the bias more than anything.
LaVerne:
Wonderful. Thank you. Let’s talk a little bit about MeaningBot because I want the audience to understand what you’re doing there because I find it fascinating, so give me a plain language explanation of MeaningBot.
Arthur:
We have spent years and years building models around words. There are traditional psychological tests that people take. Those tests define who you are from a personality perspective. What we've done is we've tied the relationship between your personality and the words you use, and we use close to 30 algorithms to look at the various ways you generate these words. Based on that, we can model the type of person you are. We use that to help you understand yourself, help you understand your strengths and weaknesses, and we use that to help companies understand who is best in a particular role inside their company, be it a salesperson or an accountant, and who would best fit into a particular kind of culture, whether it's more of a meritocracy or more of a democracy. Then the last piece is we can look at that over time, and then we can inform not only the psychology of the person, but also their behavior based on the influences. What's occurring during the week, during the month, during the year, that may influence them to act one way or another.
LaVerne:
Yeah, we are all our response to our environment, aren’t we? At some point, we all are.
Arthur:
Very much.
LaVerne:
Your tagline is, “Helping humans, not replacing them.” Explain that.
Arthur:
The fact is that the way we build our model for any particular company, to decide whether Arthur or LaVerne is good for the company, is based on how they have assessed others. Right? Ultimately, they're saying, "Okay, LaVerne's one of my A employees." However they came to that is what we support, so right there, we are looking at what they've determined and predetermined as good, and applying math to it. We're not coming up with an ideal view of who that person is or who that person should be. The second piece is that once you get into an organization, you're not isolated, so you work as part of a team. Now, you can work as part of a team remotely or together in the same physical space, but in that regard, there are dynamics that occur and continue to iterate over time.
Then the last piece is… Because we don’t quite yet feel confident that the technology is there to automatically determine a person one way or the other, we still rely on the people in the human capital side, the human resources folks to help be part of the process, so we can deliver you some insights about people, but ultimately, we still feel like it’s part and parcel of a process to engage these people, to have them feel part of the organization, and then to make a selection that you feel confident in. 90% of the companies we work with use our tool exclusively.
LaVerne:
You are truly helping humans, not replacing them. You’re giving them sort of an understanding of what works for them in a very compact way that they can apply to a broader population. Is that fair?
Arthur:
That's fair, and the way we do it is we don't just say, "Okay, you're a salesperson. These are the traits of a good salesperson." What we do is we model that salesperson within the context of the organization they're working in, right? Because you can work for one organization in sales… Imagine a used car salesman versus a technical salesperson, or a technical salesperson for one organization versus a technical salesperson for another. There are different cultures there. There are different team dynamics, and so when we build these models, we build them based on the specific organization, and we've had really dramatic success in not only identifying the right people from an outcomes perspective, but also significantly reducing the attrition that comes from hiring the wrong people-
LaVerne:
Arthur, I’m going to shift gears a little bit and you talked about it a tiny bit at the beginning, but you and I have shared a real interest in bias in artificial intelligence and bias in AI, especially as it relates to diversity and inclusion. What have you done to really address this in MeaningBot, and why?
Arthur:
We did it because it's topical and it's the right thing to do. I mean, there's no sense building a model otherwise. When we do our analysis of individuals, stripping out their ethnicity, stripping out their gender, that only works if we have a diverse group that we're pulling from to build our models, and so from a mathematical perspective, it doesn't make sense to have only one group or another group because it's just not going to work. Right? I mean, we all understand that if you're around only one group of people all the time, it skews your sense of who people are. The second reason we did it is because it's the right thing to do, right? We've all heard the stories where companies have built models and then had to stop because they didn't include women, or they only included men. When you consider the fact that only 20% of the people in STEM are women right now, what can we do to really strip out anything other than who they are at their core to make this happen?
The second piece is, how do we help companies think about this? We refer to that as an adverse impact study. Every single year, we look at all the people who have come through the system using our tool and applied to the company, then we look at those people from the applicant tracking system perspective inside the company and ask some key questions about their protected class. We apply what's called the four-fifths rule to make sure that no more than 20% of the people who have come through are being adversely impacted because they're part of a protected class, and companies are really appreciative of that.
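[Editor's note: the four-fifths rule Arthur mentions comes from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group's selection rate should be at least four-fifths (80%) of the rate of the group with the highest selection rate. The sketch below is a hypothetical illustration of that calculation with made-up numbers, not MeaningBot's actual implementation.]

```python
def selection_rates(applicants, hires):
    """Selection rate per group: number hired / number who applied."""
    return {group: hires[group] / applicants[group] for group in applicants}

def adverse_impact(applicants, hires):
    """Flag groups whose selection rate is below 4/5 of the highest
    group's rate, returning each flagged group's impact ratio."""
    rates = selection_rates(applicants, hires)
    top_rate = max(rates.values())
    return {group: rate / top_rate
            for group, rate in rates.items()
            if rate / top_rate < 0.8}

# Hypothetical numbers: group_b's rate (20/80 = 0.25) is only half of
# group_a's rate (50/100 = 0.50), well below the 0.8 threshold.
applicants = {"group_a": 100, "group_b": 80}
hires = {"group_a": 50, "group_b": 20}
flagged = adverse_impact(applicants, hires)  # {"group_b": 0.5}
```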
LaVerne:
That's really important. Coming through at a time when there was no AI, and certainly having people attribute things to you because of your race or gender and assume that's why you got your job, was always, to me, very undermining of your credibility, of your right to exist. Right? The more you can really take that cover away and get to the basis of, "This was the best person for the job. They just happened to be"-
Arthur:
The first thing is… I mean, if you think about the industrial revolution, people felt the same way about machines versus horses, right? Look. Over time, what AI can do is two things, principally. One, it can make for a more efficient process in certain regards, right? If you think about a robot vacuuming the floor, great. I mean, you can call it AI, you can call it automation. That's great.
When you talk about this notion of replacing a human being at their core level, that is years and years away. That has to do with what consciousness is. That has to do with a lot of other questions that are very philosophical. For me, AI is the ability to, A, provide more efficiency in a process, with less bias than is currently happening if the machines are trained right, and at the same time get smarter about it over time, and then provide opportunity to people who may be in one space now to pursue something else. On the other hand, consider the time and place we're in, LaVerne. One of the biggest opportunities right now is logistics, warehouse workers, and truck drivers. That is not going to… What's going to happen is things are going to shift and morph. Nothing's going to entirely replace one thing or another-
LaVerne:
What I’ve always believed is that the machine will never understand the heart. Until the machine can understand the heart, we’re okay.
Arthur:
That's exactly right. We refer to it as augmented intelligence. What we really provide is augmented intelligence, not artificial intelligence. I think artificial intelligence, in terms of what we're trying to deliver, is a bridge too far. Right? Think about the ability of one person versus another to interpret your facial reactions. You need to know a person at a certain level before you can determine whether that person is happy or not happy or smiling or sad or pensive or… It's going to take a while to get there, and so what we do is we provide augmented intelligence, which enables efficiency and accuracy, strips away bias, but also enables people to use the tools in a way that makes their jobs and their careers better.
LaVerne:
Arthur, that's why I like talking to you. You make the mind science real, and… Now, I've got to get back to that thing that I was mentioning when I introduced you, and it was about the Metropolitan Museum of Art. I mean, I've worked in technology all of my career, but never anything quite like this, and I'm just dying to learn more about it. We mentioned that you were the CIO at the Metropolitan Museum of Art, which is classically New York: fine art meets high tech, which is really cool. I go in there… You can stay forever there. How'd you end up in that role? How do you get that job?
Arthur:
I mean, the truth is-
LaVerne:
You’re Superman.
Arthur:
… a headhunter. I was young. I was still in my twenties, and a headhunter called me and said, "We're looking to find somebody to fill the role of Chief Information Officer at the Metropolitan Museum of Art. Do you know of anybody?" You know how normally when they say that, they mean you? They didn't mean me, but growing up in New York, I'm like, "Yeah. I know. I can do it." After, I think, 12 or 14 interviews, I got the job. I was in my late twenties, and it was an amazing job. To your point, most people never leave. My whole career has been kind of different. Right? That's usually a job you take later in life. I took that job early in life. I stayed there for seven years. Seven of some of the best years of my life.
For me, the Met represents 5,000 years of culture. Right? It's got 20 curatorial departments and 5 million visitors a year, and the content at the Met is second to none. The stewardship at the Met is second to none, and my opportunity to report to both the director and the president of the museum at the time meant that I interacted with every trustee, and there's nothing better than being able to walk through an empty Metropolitan Museum of Art on a Monday.
LaVerne:
Wow. Yeah. I think that would just be… Man, that would be like Home Alone for me. That would be me running through the place and sliding in my socks, and [crosstalk 00:18:18] a picture. Question for you. How are you finding meaning these days with COVID-19?
Arthur:
Personally, I'm finding meaning in a couple of ways. The first is, just following up on what we talked about, I'm doing some inner work. Right? I'm thinking about the alignment between my child and my loving adult and my higher guidance, and it's always good to check in to do that. I'm manifesting that in the way I'm working with people and the way we're engaging, and I see it's paying dividends. The second thing I'm doing is spending a lot of time playing music. I've kind of challenged myself to learn songs on every instrument, and that's been amazing. And actually, there are a lot more people available to talk to now than you would normally get when they're in the office running around, so you can pick up the phone and talk to somebody where you might not get that opportunity, so it's really always about shifting the focus. Shifting the focus. Positivity.
LaVerne:
Talk to me a little bit about your remote work index and how others can leverage that for their own good and understand what they’re best at.
Arthur:
There has been a shift. There was a shift from big corporate locales to people working more remotely even before COVID. Right? As an example, we work in a space that was growing because of the interest in people working remotely and also in the interest of finding the best talent, so that was one driving imperative. But basically, what we've done is we've built a tool that can assess people prior to coming into a company, to help the company understand how well they will do working in a remote environment. What are the salient traits, personality-wise, and what are the types of behavior that happen over time for somebody to be successful working remotely? I mean, I have three children. I can tell you that certain children can do well working independently and remotely; others cannot. If you have to have somebody work remotely, what type of remote person is that? How do you provide them the resources to understand how to make them more successful?
And this remote index works very quickly. It’s very painless. It helps companies determine if people are good fits to work remotely before they start, and then it can also help assess people who are inside the company to help them get better at working remotely given the climate.
LaVerne:
How do people get access to that?
Arthur:
You just go to the meaningbot.com website, and on the first page there, there’s a whole description about remote work. We have videos and all kinds of content there, and you can download it and you can test it, or you can reach out to somebody and we can tell you more about it.
LaVerne:
Wonderful. Wonderful. I think I might have to test that out on a few folks. Arthur, I find you fascinating. You’re on this show because you’re one of my favorites, and I’m not ashamed to say that. I think it’s very important to let people know how you feel about them when you have a chance to tell them how you feel about them.
Arthur:
I feel the same way about you, LaVerne. I really do. I mean, there's a connection. Right? To your point.
LaVerne:
Yeah. I really appreciate you, and I appreciate you joining me for this first series of episodes of Brilliant in 20. It gives me great pleasure to bring people that I find gratifying and whole and real to other people, and so I think it is a great treasure and opportunity. It's been nothing but a delight, but before we go, I have one more question. I know we're all non-traditionalists in many ways, and I know you live in different spaces, but since you're working at home, I know you're also at a desk. The question is, what's on your desk right now?
Arthur:
LaVerne.
LaVerne:
Wow. Your guitar. That was pretty good. I was getting ready to vibe. That sounded like a little Steely Dan there.
Arthur:
Yeah.
LaVerne:
Some Segovia. Spanish guitar.
Arthur:
Yeah. We can do all of that.
LaVerne:
I was getting my move on.
Arthur:
Yes. Listen, I got some moves too with the… I’ll go play some piano for you next time.
LaVerne:
It sounds like a date. I can’t sing, but I try, but… Hey. That touches my heart. Thank you, Arthur, for sharing yourself with us today. Thank you for being part of Brilliant in 20. Just thanks for bringing MeaningBot to the world to make us all better. You do it every day. I appreciate you.
Arthur:
Thank you.
LaVerne:
[crosstalk 00:23:33] gracefully, so thank you, my friend. Thank you for joining Brilliant in 20, a joint production of Scoop News Group and Emerald One. We look forward to sharing our next episode with you, so stay brilliant.