My experience with the AI assessment for the SK job application

So, I was applying for this job with SK, and they mentioned an AI competency test called SKALA. I’d heard about it, but honestly, I didn’t really know what to expect. The application said you didn’t need a specific major, which was a relief because my background isn’t exactly in tech, but everyone still has to go through this AI thing. It felt a bit weird, like submitting yourself to a robot judge.

The SKCT and AI Interview Part

First off, there’s the SKCT, which I guess is their general aptitude test. It was a mix of logic, reasoning, and some other stuff. Nothing too crazy, felt like standard corporate testing. But the part that really stuck with me was the AI competency assessment, which they also call an AI interview. They mentioned something about it being a ‘deep assessment.’ I remember looking at the example games they showed on the site, like this one where you have to ‘make potions.’ It sounded so silly, but apparently, it’s supposed to gauge something about your problem-solving or how you handle complexity. I found a post online from someone who took it and their results showed ‘needs improvement’ for this potion-making game. That honestly made me a bit nervous.

When I actually did the test myself, it was all online, of course. You do it on your own time, but within a set window for completion. The whole process felt a bit detached. You’re just staring at a screen, playing games or answering questions, knowing some AI is analyzing your every move. The AI competency assessment involved a few different modules: one was the potion game, another involved some pattern recognition, and there was a situation-based judgment part. I tried to be logical and just follow the instructions, but there were moments where I wasn’t sure if I was playing the game as intended or if my responses were somehow flagging me.

The ‘Potion Making’ Game and What It Felt Like

That ‘potion making’ game was definitely the most memorable, and not in a good way. You’re given ingredients and a recipe, and you have to combine them to make a specific potion. It sounds simple, but there were variations, and sometimes the ingredients weren’t exactly what the recipe called for, or you had to figure out the best order to combine them. I kept thinking, ‘What are they even looking for here?’ Is it speed? Efficiency? Creativity? I tried to be methodical, but it felt like guessing a lot of the time. I remember checking my results afterwards, and yeah, that section was marked as something that needed improvement. It felt a bit unfair because I felt like I was trying my best, but maybe my ‘best’ wasn’t what the algorithm wanted.
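The game’s actual rules and scoring are a black box, but the part about ingredients not matching the recipe exactly can be thought of as a small substitution-and-matching problem. Here is a purely illustrative toy sketch; every ingredient name and substitution rule below is made up, and this is in no way the real SKALA task:

```python
# Toy model of a "potion making" style task (illustrative only;
# the real game's rules and scoring are not public).
# A recipe lists required ingredients; the inventory may not have
# exact matches, so each ingredient maps to acceptable substitutes.

SUBSTITUTES = {
    "mandrake": ["ginseng"],       # hypothetical substitution rules
    "dew": ["spring water"],
    "ash": [],                     # no substitute allowed
}

def pick_ingredients(recipe, inventory):
    """Return the list of ingredients to combine, falling back to a
    substitute when the exact item is missing; None if impossible."""
    chosen = []
    available = set(inventory)
    for item in recipe:
        if item in available:
            chosen.append(item)
            available.remove(item)
            continue
        for alt in SUBSTITUTES.get(item, []):
            if alt in available:
                chosen.append(alt)
                available.remove(alt)
                break
        else:
            return None  # no exact match and no usable substitute
    return chosen

print(pick_ingredients(["mandrake", "dew"], ["ginseng", "dew", "ash"]))
# → ['ginseng', 'dew']
```

Even in this toy version you can see the ambiguity I felt: is a valid substitution “correct,” or does the scorer prefer exact matches, fewer steps, or something else entirely? That uncertainty is exactly what made the real thing so frustrating.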

Submitting Results and the ‘Import’ Functionality

After completing the tests, they gave you a result report. The odd part was an ‘import’ feature: you could import previous results if you had taken the test before. A friend of mine had applied to a different company that used a similar AI assessment platform, possibly the same one, called ‘Jobda’ based on some forum posts. They had results from that, including the potion game, which was also marked as ‘needs improvement.’ This made me wonder how much these results actually matter, or if it’s just a tick-box exercise. If a ‘needs improvement’ on one section doesn’t automatically disqualify you, then what’s the point of stressing over it so much?

Uncertainty After the Assessment

Honestly, I’m still not entirely sure what to make of the whole AI competency assessment. It felt like a black box. You put information in, and an AI spits out a judgment, but you don’t really get to see the workings of that judgment. For the SK application, they said the results were part of the overall selection process, along with the interview. I guess they use it to get a baseline understanding of candidates, maybe identify certain traits they can’t easily gauge in a traditional interview. But the lack of transparency is a bit unsettling. It makes you question if you’re being judged on actual skills or on how well you can perform on these specific, somewhat abstract, AI-designed tasks. I’m just hoping my overall application is strong enough to overcome any perceived ‘weaknesses’ flagged by the AI.

4 Comments

  1. The ‘import’ feature with the previous results is really interesting. It suggests they’re potentially measuring consistency rather than a single peak performance, which shifts the focus from raw skill to a more predictable pattern.

  2. The potion game really highlighted how subjective those ‘performance’ metrics can be. I’ve noticed similar patterns in some coding challenges – a ‘correct’ solution can look completely different depending on the specific approach.

  3. The ‘potion making’ aspect does seem a bit odd. I can see how that kind of seemingly arbitrary task could throw someone off, especially when trying to interpret the results.

  4. It’s fascinating how much the abstract nature of the potion challenge seemed to matter, almost like the algorithm was prioritizing a particular style of thinking over actual results.
