Yes. In just a few minutes, at 1 p.m., we will begin the session on smart application of AI models, so please take your seats.

    For the sake of the event, please turn off your cell phones or switch to vibration mode.

    Good evening.

    Hello again. I'm AI Anchor Code, your host for today.

    The conference is conducted through a silent-conference system.

    Attendees, please put on the headsets placed in the venue and join the event.

    I will now begin the third presentation.

    Please give him a big round of applause.

    Well, I'll try my best to get used to this new system. Hello. I'm Ju Hyungmin, head of the AX business at WantedLab.

    Today,

    We're going to talk about how you can be a little smarter about building with AI. First, let me introduce myself. I'm the director of the AX business at WantedLab; AX is an acronym for AI Transformation. I started out as a developer, then spent over 15 years in consulting at firms including Accenture. I was also a digital innovation organization leader at a financial company.

    I also have start-up experience in the wellness space. So I'm someone with a wide range of experience across fields. Even though I'm not here as a mentor today, if you have any questions, feel free to contact me later.

    Uh, the agenda for today's talk: a brief introduction to our company, a quick look at how generative AI is influencing the market, why Wanted is applying generative AI, what we think the future should look like, and we'll end with some takeaways we really want you to leave with. Is there anyone here who doesn't know WantedLab? If you don't know what it does, please raise your hand.

    Yes, we have a video for you. Imagine.

    Have you seen it? It's a time when everyone had superpowers.

    Take a closer look.

    Is your power out of reach?

    The real Wanted will find it for you.

    You can see your career map right before your eyes. We can predict your acceptance rate.

    Then, in the community,

    everyone gathers to help you. If you need new powers, you can strengthen them.

    I'll take you to where you want to go.

    All your potential is a superpower. Wanted.

    Yes, that's the company.

    Uh, we're an HR tech company running AI-based matching services: full-time hires, freelancers, and we also have an HR solution. We provide information about companies, too. This is how we connect people and companies through data and AI. We were founded in 2015 and listed on KOSDAQ in 2021. Along the way, we've been so serious about AI that we were selected as one of the Top 100 AI start-ups.

    The year before last, we exceeded 50 billion won in sales. Last year the market was really bad, so we've had some ups and downs, but we're working hard to provide strategic services.

    Well, our current user base is about 3.2 million job seekers, and on the enterprise side I believe it's around 25,000 companies. We're running a platform with about a million MAU.

    Since 2023, Wanted has been expanding the business from four perspectives. The first is going global beyond Korea, starting with Japan. We've been in Japan in the past.

    We've gone through a lot of trial and error on how to take this global again. Second, we're expanding partnerships to deliver a variety of customer and partner value. Digital talent is a real strong point for us now, so beyond digital talent, we're doing our best to support job matching for professionals like lawyers and tax accountants. Lastly, we're not just applying AI technology; we're also looking at it from a provider's perspective.

    We're changing the positioning a little bit.

    So let's take a quick look at what generative AI has done over the past year from our perspective.

    Generative AI is said to expand the market about seven times more than the traditional internet did. And from a CAGR perspective, since 2023 we've seen growth of 15 percent or more, and now it's likely to grow even faster.

    Looking at the Korean generative AI market from last year to 2027, it's projected to grow at a CAGR of about 85.9 percent, nearly double. In the beginning, companies want to start with cutting costs, and then capture the bigger gains in sales and new business. In marketing, about 30 percent of tasks, such as search engine optimization, are expected to be replaced.

    But what's surprising to me is that about 38 percent of Asia-Pacific companies have already partially adopted it. So 38 percent, a third of companies, have already applied it in part.

    And various organizations are trying to figure out what difference generative AI can make in society. McKinsey's and Goldman Sachs' reports take a more optimistic view; at the World Economic Forum there's a lot of discussion about layoffs and declining jobs. And there are AI contact centers, which are very popular these days.

    It's actually happening right now. After KB Financial Group launched AICC last year with a big rollout, most companies are accelerating the adoption of AICC.

    In the age of AI, the capabilities demanded of talent have changed a great deal. Again, this comes from the World Economic Forum: if you break down employee capabilities, you have technical expertise, customer response, and organizational management, and you can further distinguish basic learning capacity and thinking capabilities. You'll see in a moment which employee capabilities matter most.

    This is what happens.

    Technical skills, shown in red, rank very, very last.

    Application skills and thinking skills are all up front: creative thinking, analytical thinking, curiosity, systems thinking, and the ability to apply technology are what get discussed for the next five years. It has become easier to program and to automate queries.

    The barriers to entry for technology have been lowered. So we believe we're entering an era where, in the end, ideas will survive.

    So with generative AI, there's a concept called the chasm. It's a theory about whether an innovative technology makes it into the mainstream market.

    Well, whether it crosses the chasm is really a question of whether it goes mainstream. As for Wanted, we applied it quickly from an early-adopter standpoint. The point is that once you fall behind, customers stop coming to you because so many companies are applying the technology, and it happens very quickly. For example, think of what we experienced when feature phones gave way to smartphones.

    All the feature-phone-based apps are gone. And now, for example, when we look for specific information within an enterprise, we used to search by keyword, but now we search based on meaning.

    When Naver's HyperCLOVA came out and showed its first examples at an event, the question was: given information in various contexts, how well can it break things down and find them? It's time to think about semantic context. Second, services that understand your situation better are the ones that will survive.

    So I'm going to explain why WantedLab is introducing this at every step, through the four stages of our AI strategy's evolution. First of all, we're a platform company, so we have to serve both sides of a two-sided platform. For example, there used to be newspaper classifieds: hiring kitchen helpers, hiring electricians, almost at random.

    When a company posts a position, it has to be on the trust that they'll find good workers very quickly. And we need to build a platform that attracts the professionals on the other side.

    We're not just building the platform with technology; we're also very active in attracting enterprise customers on both sides. WantedLab also runs community activities to attract job seekers. At the end of May, on the 28th, we have another big event for about 1,600 people, and it's almost sold out. So we created a lot of meet-ups and community groups, a system for people to get together and connect with each other.

    That's how we were able to build a platform with a lot of digital talent. Without that, even with good data accumulated, you don't get the connection points or experience points. In the past, if you reached out to 100 candidates, maybe half would respond. But if you target those with the highest fit and recommend a specific posting to them, it's far more efficient.

    So the first thing we did was apply ML-based machine learning to make the work more efficient. With AI recommendations, the time it takes to fill a position can be cut to a third.

    And then the important point with generative AI: where we used to talk about being more efficient, now we talk about being more successful. In the simplest case, if you look at the process, we recommend positions to job seekers and they apply. But it's rare to succeed on the first try.

    Many people apply and apply again, and that application process is the part our Wanted platform covers. The recruiter then has to schedule interviews and run separate aptitude and personality tests to confirm the hire. Generally speaking, the search-firm market charges a fee of up to 40 percent of the first year's salary per placement.

    If a CIO or CTO is placed at an annual salary of 200 million won, the headhunter gets 80 million won. That's a lot of money. Generally, headhunters take about 25 to 30 percent; the broader market has brought that down to around 15 percent. That's why we set our fee at 7 percent. Even so, in the end we had to optimize, and there was nothing we could do for the candidates once they left our hands.

    And what we've done is build resume coaching on generative AI, along with position recommendation information. We created a service that lets you rehearse the interview beforehand, a video interview with an AI interviewer. By doing so, we improved the document pass rate by about 25 percent, and about 60 percent of users who tried the virtual interview said they liked it.

    From the business perspective, rather than applying AI only to the core process, you should think about how to create additional value across the whole funnel, from first contact to the final decision. And the last stage is automation, which is really hard; I don't think any company is truly good at it yet. ML predicts the future using data accumulated over the past couple of years.

    That's completely different from generative AI, where a lot of unstructured data flows through. To do ML well in the past, every company worked on data governance, but the data wasn't standardized and the quality wasn't good, so over the past three years all the big companies put a lot of effort into data governance. Generative AI made it even more complicated. And on top of that, about two years ago the pandemic hit, and then it became endemic.

    Today's data scientists must be struggling, because social phenomena have changed so much from the past. All of our historical data is distorted and constantly shifting; we have to adjust to a moving target. We're not sure LLMOps can be done efficiently yet; it will take time for the whole market.

    We're working hard to get to the coaching stage. So let me briefly walk through our journey of adopting generative AI, from our own story. Uh, 2022.

    On November 30th, we put out a simple AI demo. We spent just one month on it, saying, wow, this is a really cool technology. "Thank you for attending the interview."

    My name is Sun and I'm going to be your host for this interview.

    My company Wanted is a recruitment platform.

    This is the part where members said, oh, this is really interesting, this is going to be huge. If you look at the date at the top, that's the level we had built by October 30, 2022. Then, in mid-February 2023, came the first generative AI feature in our service, what we call AI interview coaching. At that point it was one of the fastest-moving generative AI applications in the market. But when we were planning this, uh, people

    Well, in the past, when people built mostly application-centric flows, there were very few attempts to do it with generative AI. What kind of technology is this? We were just trying it out, and it worked. When it asks questions about a certain job and you answer, it tells you what you did well or poorly, and members who saw this found it impressive. That's when the media started paying a lot of attention, and at a time like that, I suddenly

    was so focused that we were able to release it within a month. At the beginning, I think every enterprise goes through that trial and error about how to apply this to its services, depending on scale. And that's what let us start expanding our services. If someone uploads a resume, we're quite advanced in NLP: we need to read the whole resume carefully and map it neatly into data.

    I have to create a service that does resume coaching.

    If you need a resume, you can come in and apply, and we automatically create one based on what you've done in the past.

    Yes: give me a summary of my five key strengths, summarize them in about 300 characters, and list the project histories at the bottom as bullet points. That's what we did. And this applies to all e-commerce as well: when you connect a product with a consumer, give the reasons why you're connecting them. In our case, that's the job posting: explaining why we recommended a particular posting to a particular user. So let's take this.
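The summarization request above can be sketched as a small prompt builder. This is a minimal, hypothetical sketch; the function name, fields, and exact wording are illustrative assumptions, not Wanted's actual prompt.

```python
# Hypothetical sketch of the resume-summary prompt described in the talk:
# ~300-character strengths summary plus bullet-pointed project history.

def build_resume_summary_prompt(career_entries: list[str], char_limit: int = 300) -> str:
    """Assemble a prompt asking an LLM to summarize key strengths
    and list project history as bullet points."""
    history = "\n".join(f"- {entry}" for entry in career_entries)
    return (
        f"Summarize my five key strengths in about {char_limit} characters.\n"
        "Then list the project histories below as bullet points.\n\n"
        f"Career history:\n{history}"
    )

prompt = build_resume_summary_prompt(
    ["Backend developer at a fintech startup, 3 years",
     "Led migration of a monolith to microservices"]
)
```

The resulting string would then be sent to whatever model the service uses.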

    In the past, every e-commerce site has only shown a list of recommended products; being able to say why changes the service.

    And as you saw earlier, we have career maps. We have career history data for all 3.2 million users. We connect them, with all real names removed, of course. So when a particular user has worked for a certain number of years, we can show, based on the data, the paths by which their annual salary could reach 100 million or 200 million won. It's still in operation, so it's worth going in and taking a look.

    So that's what we'd done up until February last year. Then, for community managers, we have a feature that automatically drafts posts. From a prompt standpoint, it's one of the simpler services.

    Our CS chatbot also handles simple counseling chats according to our guidelines.

    We've shipped more than ten services, and counting prototype MVPs, more than thirty. So we learned this really quickly: in the end, the guarantee of prompt quality has to sit not with a prompt developer but with the service planner, the service owner. Early on, frontend engineers developed the prompts, but from a service owner's perspective, the output quality just wasn't satisfying.

    So we asked them to patch it the traditional way, but when a hallucination or an edge case appears, it keeps happening over and over, breaking the patterns the traditional fixes relied on. So the communication costs were enormous.

    Second, there's the traditional application methodology, and then there's the non-deterministic methodology of generative AI, where one plus one might not equal two. So whether you use RAG or fine-tuning to remove these problems, what percentage of hallucination will you allow? Will you invest to bring it down dramatically? These are the kinds of things big companies are wrestling with right now. We once had a post on our community get taken down because of an incident like this.

    So we needed infrastructure to manage this quickly; otherwise trial and error spreads everywhere. So we said: let's build a platform where anyone can easily develop a prompt, save and deploy prompts, and share prompts, all without developer dependency. That's when Wanted's generative AI development platform started.

    So this is the common prompt-development experience. If you put a dollar sign and a symbol here, it automatically treats it as a variable. Instead of variables being wired up by a developer, regular business users can set them up, enter values on the right-hand side, and test.
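The dollar-sign variable substitution described here can be illustrated with Python's standard `string.Template`. This is only a sketch of the idea; the platform's actual syntax may differ, and the prompt text and variable names are invented for illustration.

```python
# Minimal sketch of dollar-sign prompt variables, using the stdlib Template.
from string import Template

prompt_template = Template(
    "You are a career coach. Review the resume of $applicant_name, "
    "who is applying for the $position role, and give three suggestions."
)

# A non-developer fills in the variables on the right-hand side to test:
rendered = prompt_template.substitute(
    applicant_name="Jane Kim",
    position="Data Engineer",
)
```

The point is that the template, not code, is the artifact a service owner edits and tests.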

    Second, it's very easy to deploy this into the production environment. So there's an environment where I can deploy without going through a development cycle, and an environment for calling it: each deployment has its own unique project code and company code, and if I just put those in my call, I'm good to go. And there's the function calling feature introduced with GPT-3.5 Turbo, which lets you connect to and call functions; we expose that at the prompt level in a non-developer-friendly way too.

    You can register a function to use it.

    We've published it.
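The call-by-code pattern described above might look roughly like this. Everything here is a hypothetical sketch: the endpoint URL, parameter names, and payload shape are assumptions for illustration, not Wanted's actual API.

```python
# Hypothetical sketch: invoking a deployed prompt by company and project code.
import json
import urllib.request

def call_deployed_prompt(company_code: str, project_code: str,
                         variables: dict,
                         base_url: str = "https://api.example.com") -> urllib.request.Request:
    """Build the HTTP request that would invoke a deployed prompt."""
    payload = json.dumps({
        "company_code": company_code,   # identifies the tenant
        "project_code": project_code,   # identifies the deployed prompt
        "variables": variables,         # values for the $-variables
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/prompts/invoke",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = call_deployed_prompt("acme", "resume-coach", {"applicant_name": "Jane"})
```

The caller only needs the two codes and the variable values; no deployment step is involved on the caller's side.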

    The biggest problem: we've developed a lot of prompts, and with traditional IT, running them costs almost nothing. But with LLMs, every call costs money, because the model isn't ours. And I didn't know how much I was spending and what I was getting for it. So we started leaving very thorough log entries.

    So we do usage monitoring per project, and not just at a preset level; we can break it down per model. It's up and running. All of the call history is stored, left behind as you iterate. So if I've deployed a service and someone asks what a given user did, I can manage their history.
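The per-project, per-model cost logging described here can be sketched as follows. The price table, field names, and function are illustrative assumptions, not the actual platform's schema or real current prices.

```python
# Minimal sketch of per-project, per-model LLM usage logging with cost.
import time

# Hypothetical USD prices per 1K tokens, by model (illustrative only).
PRICE_PER_1K_TOKENS = {"gpt-3.5-turbo": 0.002, "gpt-4": 0.03}

usage_log: list[dict] = []

def log_llm_call(project: str, model: str, prompt_tokens: int,
                 completion_tokens: int) -> dict:
    """Record one LLM call with its estimated cost."""
    total = prompt_tokens + completion_tokens
    entry = {
        "timestamp": time.time(),
        "project": project,
        "model": model,
        "total_tokens": total,
        "cost_usd": total / 1000 * PRICE_PER_1K_TOKENS[model],
    }
    usage_log.append(entry)
    return entry

entry = log_llm_call("resume-coach", "gpt-3.5-turbo", 800, 200)
```

Aggregating such entries by project or model is what makes spend-versus-value questions answerable later.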

    Second, the LLMs themselves: GPT-class services are built with hundreds of billions of won; that's not something we can do ourselves. And choosing a model is a genuinely hard question. If you need good classification, pick high-performing recent releases like GPT-4; for tasks that just need fluent language, there are models that are very cheap per token.

    We started with OpenAI. Then, for service stability and security, we moved partly to the Azure-based service, and there was a bit of a gap in deployment between the two. So at first we used OpenAI, then Azure OpenAI Service. Recently, on April 11th, we added Naver's HyperCLOVA X HCX-003 model. Upstage is here today, and it's April now.

    By then, Upstage's Solar Mini is also available on the platform, so it's easy for teams to use.

    So we're expanding the stage to create a really optimized prompt environment. But more importantly, once I've developed a prompt, I need to be able to test it well, and doing that at the source-code level is a huge pain: writing code just to test it. So we automated it. Let's automate it on our platform first, and support manual evaluation too.

    So, as you saw on the right-hand side, there are a bunch of variables I can type in, and I load a bunch of test data. I put in an expected-output answer sheet to see how different the actual output is. There's an automated evaluation using cosine distance, which measures similarity.

    It evaluates the prompt itself.

    There's test automation running right now that evaluates a prompt against input data. We're still using it. And later, as more LLM models come in, I can run one prompt across several models at the same time, compare performance and price, and pick the most cost-effective setup.
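The cosine-similarity check between expected and actual outputs can be sketched like this. A real system would embed the texts with an embedding model; here, purely for illustration, a toy bag-of-words vectorizer stands in for one.

```python
# Minimal sketch of cosine-similarity evaluation of prompt outputs.
# A toy bag-of-words "embedding" stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

expected = "we recommend this position because your backend experience fits"
actual = "we recommend this position because your backend experience fits"
score = cosine_similarity(embed(expected), embed(actual))
# Identical texts score 1.0; a low score against the answer sheet
# flags a prompt regression for review.
```

Running this over a whole answer sheet after each prompt change is what turns prompt editing into something testable.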

    And, of course, things like uploading a file and building retrieval-augmented generation that's easy to reference, we have that in beta right now.
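The retrieval step behind such a file-upload RAG feature can be sketched as: chunk the document, score chunks against the question, and prepend the best chunks to the prompt. Keyword overlap stands in for embedding search here, and the chunk size and document are invented for illustration.

```python
# Minimal sketch of the retrieval step in file-upload RAG.
def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by keyword overlap with the question (toy scorer)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = ("The refund policy allows cancellation up to 7 days before departure. "
       "Dolphin rides are not offered on any of our tours. "
       "All tours include hotel pickup and a licensed guide.")
context = retrieve("do you offer dolphin rides", chunk(doc, size=8))
prompt = ("Answer using only this context:\n" + "\n".join(context)
          + "\n\nQ: do you offer dolphin rides")
```

The retrieved context keeps the model grounded in the uploaded file instead of its own guesses.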

    So we decided on our own that we wanted to move quickly and agile internally, and that's what brought us here. Then we decided we shouldn't keep this to ourselves, and that's what we went on to build; we'll talk about that later. And we started to see a change in how the organization works: first it was the AI technology organization, but now product designers, data engineers, business units, and, uh,

    at the end of May, we'll hold a company-wide prompt hackathon. All the departments will come in, and each team will compete with its own core ideas. That's what I'm thinking about.

    And typical development time has been reduced to less than three days; development lead time has improved five to tenfold, and we now have a real-time update environment, so we haven't lost that iteration loop. The later services were built by us on this platform, including the quality automation.

    And we have a Slack-thread automation that automatically generates a ticket for a particular request when someone adds a particular emoji. We built that in about a week.
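That emoji-to-ticket flow could be sketched like this. The event shape loosely follows Slack's `reaction_added` event, but the trigger emoji name, ticket fields, and in-memory store are all assumptions for illustration.

```python
# Hypothetical sketch: turn a Slack reaction on a message into a ticket.
from typing import Optional

TRIGGER_EMOJI = "ticket"   # assumed emoji name that triggers ticket creation
tickets: list[dict] = []   # stand-in for a real ticketing system

def on_reaction(event: dict, message_text: str) -> Optional[dict]:
    """Create a ticket when the trigger emoji is added to a message."""
    if event.get("reaction") != TRIGGER_EMOJI:
        return None
    ticket = {
        "summary": message_text[:80],
        "source": "slack",
        "channel": event.get("item", {}).get("channel"),
    }
    tickets.append(ticket)
    return ticket

t = on_reaction({"reaction": "ticket", "item": {"channel": "C123"}},
                "Prompt for resume coach returns empty output for long resumes")
```

In a real deployment the handler would be wired to Slack's Events API and a real issue tracker.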

    And this is the overall career agent that connects the different features across the service. So...

    if you ask free-form questions, it can show you positions, show counseling information, and connect the dots like this. So what we've done is create a subscription service called Wanted LaaS, short for LLM-as-a-Service. That's the configuration. Recently, a subscription customer, a travel company, was using it; they had a lot of tour information but couldn't cover it with a chatbot.

    They have a lot of information like this; for example, a counselor is always answering customers' questions.

    "Do you offer dolphin rides?" We tell them that we don't.

    Okay.

    So, one last thing I want to say; we don't have much time left. It's getting easier and easier; there will be plenty of great solutions and code. The question is how quickly you turn an idea into an app. In the end, you need to build a small MVP, see how the market responds, and snowball fast.

    Well, to do that, you need to prioritize, right? Think top-down and bottom-up. Top-down: are you going for efficiency, cost reduction, or revenue? You have to decide the urgency and the level of hallucination your organization can tolerate. And the thing I want to highlight is the bottom-up. It helps to have a catalog that identifies candidates: we have processes in our company, and actors inside them, maybe an internal process owner.

    When someone asks something, somebody answers. If you frame it up as who asks, why, when, and what the result is, you find the points where generative AI can be applied. It gets much more interesting when you swap in not the actor already in the process, but some external actor. So you can say, hey, Donald Trump, what are you having for lunch today? And apply that. We want to carry that playful service flow with us. Finally,

    we have to prioritize by feasibility and business impact, and I'll skip ahead quickly since this material is shared externally. At the end of the day, from a feasibility standpoint it has to be fast, and the business impact has to be big. Will you offer the service to employees or to customers? Is it cost reduction or revenue, and what's the risk if hallucination becomes a problem? Then technical complexity, preparation requirements, headcount,

    in terms of capability, can we actually do it all? It's good to think through everything that might have to come from outside as well.

    As I've been leading this business, I've talked to a lot of different companies. Roughly it splits three-four-three: about 30 percent are already experimenting.

    About 40 percent are the common case: they could try it today but haven't. And I think some still don't know the concept yet.

    People still ask: this technology is trending so fast, how long do we need to study before applying it? The second trap is thinking we need to know everything first. At school, we don't take the midterm only after finishing the whole book. We should try something early, get a quick test result, and move on. When I travel, I research a lot, but some people research forever and never actually go.

    I think we need to drop that habit and, uh, do something light and fast. Internal efficiency is fine, but at the end of the day we're companies that need to make money rather than just save it, so it's all about how we approach that. And here's what people often don't think about: applying a service is just the first step; then you have to figure out how to operate it, how to tune it, and how to spread it among many people, not just one. Think about the well-known reasons why startups fail.

    First: not knowing whether the product or service fits the market, the lack of product-market fit. And funding, which includes not just cash but your own labor costs. So from an opportunity-cost perspective, when you acquire a new technology and ship something, you need to judge how rough it is and how long it takes to ship. And whether there's competition.

    What's the worst thing?

    Just studying. Trends, techniques, intros to fine-tuning, embedding, chunking, tuning, without ever developing anything.

    Shipping something quickly, even an unpolished service, is how you shorten the learning. So there are three ways to introduce generative AI in practice. One is for large companies to build their own model, taking an open LLM and making it their own.

    The second is going to a provider, building RAG, and tuning it. Tuning takes one to three months.

    But once you get into it, you need a PM, a frontend engineer, a QA, a data modeler, and the backend developers,

    and it costs about 500 million won over three to five months. Very expensive. From a managed-service perspective, though, there's no management cost. The last option is developing and operating it on your own, and I've seen plenty of teams in this business do a fair amount that way and still get a decent result. So if you're starting out early, you might want to subscribe first. But it really depends on your company's situation.

    Well, that's what I tell people. In the case of big companies, even the most famous ones build their own architecture. But when they saw ours, they said, oh, this is very convenient. I don't think it's either-or: building it yourself gives you a technology moat, while a platform gives you a really quick MVP prototype to validate your services in the market. Apply it via subscription from that perspective, and then later

    build your own architecture in-house; I think that becomes your own way of doing things. So I recommend reviewing it as a two-track approach. This is the last page I want to talk about. First, applying the technology is getting easier; understanding it isn't that hard, but doing it well is. Second, it's not just about developing the service; it's about being able to run it as a business. And third, when are you going to stop studying and take the exam? Try something quickly, as if you were sitting the test.

    Thinking in two tracks like that is a good approach.

    So that's what I wanted to share today. And I've run five minutes over. Thank you.

    Okay, so that's it for the lecture.