AI-Powered Evaluation and Ranking Systems

I just woke up with an idea that went from amazing to alarming in about three minutes.

The core idea

AI is about to replace many systems that were designed to teach, test, filter, and rate people.

Teaching and testing

This is the one that woke me up, and I was like:

Wow! This would be so cool.

Example: New employee training at a company

So imagine someone onboarding at a company who needs to learn the various HR policies, important cultural concepts, how to request resources, how to get access to different systems, security practices, and so on.

Today we have someone live-teaching these classes with slides, or people are just given slide decks, perhaps with some video, and told to self-study.

Then they take a multiple-choice test, which is designed to be pretty damn easy, and they're done.

That's the current, old way.

The new way

The new way would be a life-like AI avatar that has all the knowledge of what needs to be learned and absorbed.

And they look at the person's background, the companies they've worked at, the school they went to, and so on, and come up with the perfect way to teach the content to them. But they're also a world-class expert teacher and an expert at creating curricula.

How? Because the avatar is based on the knowledge of a massive frontier model like GPT-5 or whatever.

The new way of testing

When it comes time to test, instead of multiple-choice questions, you actually have a conversation with this avatar. They present you with scenarios, and you have a conversation with them.

They ask you questions like why or why not. They ask you what-about questions.

Again, it's a conversation. And they can go down various different paths to establish that you either do or don't know the material well enough to safely start working.

So that's the teach-and-test paradigm. Pretty cool.

Scores can and will get a LOT scarier

But then we come to scoring.

What if, after this long conversation, they could give you a security score? Or a knowledge score for how deep your knowledge of the company policies is. Or how easy you are to trick with a suspicious email.

But the more I thought about this, as I was still waking up, the more I realized how big this actually is.

The bigger test-to-score trend

This is where I started getting scared.

Think about an ultra-deep interview by an AI with full world-knowledge. Like a GPT-6 level intelligence, say a 200 IQ (conservatively), but more importantly, a massive understanding of what makes people successful in various endeavors.

The Astra™️ Score

Astra is the company with the most popular score on YouTube and Insta and TikTok in 2026 (yes, it survived). Your Astra is a score between 1 and 100, with a number of subscores.

Here are some of the aspects of the test:

  • A 3-day (7 hours a day) deep interview with a full-sized AI representative from Astra.

  • Knowledge and Past

    • Your personal life philosophy

    • Your personal life goals

    • Your understanding of math, physics, biology, history, economics, and many other disciplines

    • Your understanding of human nature

    • A review of everything you've written online, every video you've made, and so on

    • A review of everything that's ever been said about you publicly online

    • Your past, your traumas, your preferences, what you're looking to accomplish in life

    • Your work history

    • Your skills

    • Your past relationships and how they continue or how they ended

    • Etc.

  • Scenarios

    • They then present you with all sorts of scenarios to actually test what they learned above, and to tease out more information on your personality type and strengths and weaknesses

    • They also use immersive tech to put you under stress and see how you respond

  • The whole thing is done with a full camera on you and your surroundings, so they're observing body language, facial expressions, etc., as well as your actual answers and your voice.

  • Health biomarkers, taken from blood and saliva samples. Optional, but encouraged. 😀

The result of all of this is your constellation of Astra scores, which gets rolled up into one Astra™️ score.
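As a rough sketch of the rollup itself, assume it's just a weighted average of 0-100 subscores clamped into the 1-100 range. The subscore names and weights below are made up for illustration and don't describe any real scoring product.

```python
# Hypothetical rollup of Astra-style subscores into one 1-100 score.
# Subscore names and weights are invented for illustration only.
SUBSCORE_WEIGHTS = {
    "knowledge":       0.25,
    "scenarios":       0.30,
    "stress_response": 0.20,
    "communication":   0.15,
    "biomarkers":      0.10,  # optional, but encouraged
}


def astra_rollup(subscores: dict[str, float]) -> int:
    """Weighted average of 0-100 subscores, re-normalized over the ones provided."""
    total_weight = sum(SUBSCORE_WEIGHTS[name] for name in subscores)
    weighted_sum = sum(value * SUBSCORE_WEIGHTS[name] for name, value in subscores.items())
    return max(1, min(100, round(weighted_sum / total_weight)))


# Biomarkers omitted since they're optional; the remaining weights re-normalize.
print(astra_rollup({"knowledge": 82, "scenarios": 74,
                    "stress_response": 61, "communication": 90}))  # -> 76
```

The arithmetic is trivial; everything scary lives in the pipeline that produces the subscores.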

The scary part is how it will be used

What do you think will be more predictive of success in a job? A set of arbitrary questions from a hiring panel, or your Astra scores in the areas of Conscientiousness, Neuroticism, IQ, Work history, Experience score, Discipline score, and so on?

The Astra AI will take all your scores. All your work history. All your personality traits. Your fucking blood work. Analysis of everything that's been said about you. Everything you've ever said. An extensive review of your publicly-visible work across your entire career. A deep personality analysis of your entire past and your life. Analysis of your honesty from your body language and voice and facial expressions… and so on.

And it'll use its full knowledge of what makes people successful, combined with its full knowledge of human psychology and a growing list of other profiles it can correlate with, and output an answer.

It'll be the closest thing to a true assessment of a person that we've ever seen, and it won't be close.
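For the "growing list of other profiles it can correlate with" part, here's a toy sketch of the underlying idea: represent a person as a feature vector, compare it against past profiles with known outcomes, and report how the most similar ones turned out. The features, the data, and the nearest-neighbor approach are all illustrative assumptions on my part, not how any real system is known to work.

```python
# Toy "correlate with known profiles" predictor: k-nearest-neighbor over
# invented features. Purely illustrative; the data and features are made up.
import math

# (conscientiousness, neuroticism, experience, discipline) -> succeeded in role?
PAST_PROFILES = [
    ((0.9, 0.2, 0.8, 0.9), True),
    ((0.4, 0.7, 0.5, 0.3), False),
    ((0.7, 0.4, 0.9, 0.6), True),
    ((0.3, 0.8, 0.4, 0.5), False),
]


def predict_success(candidate: tuple[float, ...], k: int = 3) -> float:
    """Fraction of the k most similar past profiles that succeeded."""
    nearest = sorted(PAST_PROFILES, key=lambda p: math.dist(candidate, p[0]))[:k]
    return sum(outcome for _, outcome in nearest) / k


print(predict_success((0.8, 0.3, 0.7, 0.8)))  # ~0.67, i.e. "likely to succeed"
```

A real version would be a far richer model, but the logic is the same: your profile gets placed among everyone else's, and the output is a prediction.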

I'm an AI optimist who sees the good that can come from something like this, but this still scares the crap out of me. Why? Because for every company that builds a benign-ish version, there will be 3 companies building a dystopian version.

Let's keep going

Okay, so we talked about hiring.

Something like this might replace a lot of hiring processes. Or Astra will simply get trained on what the company needs and will create custom interview avatars for just that particular role.

The point is, this level of depth will be way better, in terms of being more predictive of success, than anything that's come before it.

Now let's expand into greater society.

People with high scores will display them right on their personal APIs, so people can visually see them in their AR interface.

Someone showing off how great they are with their Astra/Omni scores

Think about dating. Think about vetting whether you want to start a family with someone.

There will be scores for:

These scores will be extraordinarily deep and accurate. But will they capture who we really are?

Or an even worse question: will people even care whether they do, once the scores become widespread?

A universal vetting mechanism

What this all starts to point to is a cycle of teaching, testing, and scoring.

AI will be the best teachers, because they can be multi-modal, super-intelligent, and nearly all-knowing, plus they can tune their teaching style perfectly for the student.

Ditto for the testing. It can feel so natural, and can pull out the truest and best performance from the student.

And then the scores. They'll be so multi-faceted. So deep. And so damning when they're low.

My concern

My biggest worry with systems like this is that they'll take biases that already exist in the world and put actual numbers on them.

You take one look and think "not dating material," but you don't know how you came up with that. Well, Astra can tell you. Here's a breakdown of the 137 subscores that resulted in them getting a 38/100 in "should you date them." Answer: No.

So damning. So final. So gross.

It reminds me of dystopian sci-fi. It reminds me of eugenics. It reminds me of elitism. It reminds me of basically everything we shouldn't be building.

But we will build this. I guarantee you people have already started.

The problem is that current, legacy rating systems are so bad, and so crappy at being predictive, that these replacements will be devoured by the many entities that need them to thrive.

Companies need the best people. Intelligence groups need people who are stable and reliable. Single people need someone who will be a good partner.

Our morals run everything until they don't. And the point where a bad decision can harm you is exactly where that line is.

Just like AI itself, expect this. It's not a thing that might happen, or could happen. It's a thing that will happen, and probably already is.
