For the second part of this post, I wanted to talk about my first work experience - ever - with a fully virtual assistant.
Let me set the context.
In the course of my work, I was dealing with a vendor who was trying to arrange a meeting with us through his personal assistant, Amy Ingram.
So we went back and forth to settle on a date and time for the conference call.
I responded to the initial request as follows:
"Hi
Amy,
"Her" response was (using Billy as a pseudonym):
"Hi Malik,
I'm sorry, but that time doesn't work for "Billy".
How about Wednesday, Jun 22 at 11:30 AM EDT? "Billy" is also available Wednesday, Jun 22 at 3:00 PM EDT or Thursday, Jun 23 at 9:00 AM.
Amy"
When I read Amy's response, I thought to myself: "I told her that Thursday is open, so why did she say that doesn't work for "Billy"?" But then I thought "whatever" and just responded with:
"Thursday at 9 am works, thanks"
To which Amy responded:
"Hi Malik,
Thanks for letting me know.
I'll send out an invite once I've confirmed a time with "Jim".
Amy"
[Jim is my colleague; real name hidden for confidentiality purposes]
Eventually, it dawned on me: I wasn't dealing with a person, but a robot!
And then it hit me: the future is here.
The one thing I realized through this interaction is how forgiving I was about the error, because I thought the thing on the other side was human: everyone makes mistakes, so it was no big deal that "she" didn't get that I was open on Thursday.
This has a deeper implication for how "knowledge work" gets automated.
When we gauge machines on their ability to perform cognitive tasks, such as booking meetings, we should think carefully about how good is good enough for us to work with machines instead of humans. As my interaction shows, they don't need to be perfect - they just need to get the job done.
In the exchange above, we were able to schedule a meeting, and the fact that "she" didn't understand I had told her Thursday was open had no real consequence for the role "she" was playing. The meeting eventually got booked and that was that.
Ironically, I realized that I had already come across Amy at the DLD Conference in NY that I had attended a few weeks earlier.
Dennis Mortensen (founder of x.ai) describes the challenge of setting up meetings and how this technology can solve the problem (profanity alert!):
His talk starts at 5m47s:
As Dennis mentions, it's a very basic problem, but at the same time it's so complicated. The particular challenge is politeness: it's hard for AI to parse through it and pick out the substantive facts that pertain to setting up the meeting. If we look at my response, we can see the challenges first hand (see the toy sketch after this list):
- When I said I was out of town, the AI had to understand that this meant I was not available.
- I did not include Wednesday as a possible date, which implies that I'm also not available that day.
- When I stated I was open on Thursday, I meant I was available all day.
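To make the difficulty concrete, here is a minimal, purely illustrative sketch of what "reading" a reply like mine involves. This is not how x.ai's Amy actually works; it's a toy rule-based interpreter with hypothetical heuristics, just to show that availability is usually implied rather than stated, which is exactly where simple rules break down.

```python
# Toy sketch: inferring per-day availability from a free-text reply.
# Hypothetical heuristics only - a real assistant needs far more than this.
import re
from dataclasses import dataclass


@dataclass
class Availability:
    day: str
    available: bool
    reason: str


def interpret_reply(reply: str, proposed_days: list[str]) -> list[Availability]:
    """Guess availability for each proposed day from a conversational reply."""
    text = reply.lower()
    results = []
    for day in proposed_days:
        # "Out of town on Tuesday" never says "unavailable" - it must be inferred.
        if re.search(rf"out of town[^,.]*{day}|{day}[^,.]*out of town", text):
            results.append(Availability(day, False, "out of town implies unavailable"))
        # "Open on Thursday" implies the whole day, not one specific slot.
        elif re.search(rf"open (on )?{day}|{day} works", text):
            results.append(Availability(day, True, "explicitly open (assume all day)"))
        elif day in text:
            results.append(Availability(day, True, "mentioned without objection"))
        else:
            # Omission is ambiguous; here we arbitrarily read silence as "no".
            results.append(Availability(day, False, "not mentioned, inferred unavailable"))
    return results


if __name__ == "__main__":
    reply = ("Hi Amy, thanks for reaching out. I'm out of town on Tuesday, "
             "but I'm open on Thursday.")
    for a in interpret_reply(reply, ["tuesday", "wednesday", "thursday"]):
        print(f"{a.day:10s} available={a.available}  ({a.reason})")
```

Even in this tiny example, the answers depend on fragile assumptions (does silence about Wednesday mean "no"? does "open" mean the whole day?), which is why parsing polite, human phrasing reliably is so much harder than it looks.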
So what does this mean for jobs? Are accountants going to be replaced by Amy one day?
It actually shows the level of complexity involved in the most basic of human interactions, and how much more complex it would be to train AI to perform even the most basic auditing procedures - at least for now.
Dennis actually made a good point about this in the Q&A portion of the discussion as it relates to jobs. The other presenter said he sees massive displacement as a result of AI, specifically in the truck driving industry. Dennis, on the other hand, was a bit more optimistic. He noted that tools like his will essentially give assistants to people who don't have assistants. For example, the vendor we were dealing with likely wouldn't have hired an assistant to help book appointments.
And I think that's where auditors and accountants need to look at how AI assistants like Amy Ingram can help automate the mundane tasks that none of us likes to do.