

A funny thing about “AI skills” that I’ve noticed so far: they’re mostly just skills in the thing you’re trying to get the AI to help with. If you’re good at that thing, you can often (though not always) get an effective result, mostly because you can talk about it at a deeper level and catch the mistakes the AI makes.
If you have no idea about the thing, it might look competent to you, but you just won’t be catching the mistakes.
In that context, I’d call them thought amplifiers. They’re pretty effective at the classic “talking through a problem can help you debug it, even if the other person contributes nothing of value, because explaining it forces you to look at the problem from a different angle, and that new perspective can make the solution visible” effect, with the bonus that they can actually contribute some valuable pieces too.
You could have it write unit tests as black-box tests, where you only give it access to the function signature. Even then, though, it still needs to know what the correct results should be, and that will vary from case to case.
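As a minimal sketch of what I mean, here’s roughly what signature-only black-box tests could look like in Python with pytest. The `slugify` function, its module path, and its contract are all hypothetical stand-ins; the point is that the tests depend only on the signature and the stated behavior, never on the implementation.

```python
# Hypothetical black-box tests: assume the AI was shown only this
# signature and a one-line contract, never the implementation.
#
#   def slugify(title: str) -> str:
#       """Lowercase, trim whitespace, and join words with hyphens."""

from myproject.text import slugify  # hypothetical module path


def test_basic_title_is_lowercased_and_hyphenated():
    assert slugify("Hello World") == "hello-world"


def test_surrounding_whitespace_is_trimmed():
    assert slugify("  Hello World  ") == "hello-world"


def test_empty_string_stays_empty():
    assert slugify("") == ""
```

Notice that every assertion encodes a judgment about what the right answer is. That knowledge has to come from somewhere other than the code under test, which is exactly the catch above.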