A few months ago, I started an initiative by blocking 7pm-8pm every day on my calendar for any employee to book a slot for a no-agenda chat. One of the most common areas of discussion in these chats has been: How will AI impact our jobs? What direction should we go in? What knowledge and skills should we acquire? Based on these chats, I have summarized some of my thoughts below:
- Become a learning machine. Now, more than ever before, it is imperative that we all be constantly learning. A minimum of one hour a day must be invested in learning. This could be while commuting, working out, or even during lunch breaks. Ideally, it should be an hour blocked on your calendar where you invest in yourself. And the best way is to just pick a topic – Azure Security, AWS Security, Container Security, IoT, or AI red teaming – and go deep into it. Use LLMs to develop a training plan for yourself, and even get them to list out the right resources to get you going.
- Automate yourself. Instead of worrying that AI will automate your work, get ahead of it and automate it yourself. If you want to stay away from code, use platforms like n8n to build automated workflows. If you like coding, use Cursor or Claude to start with vibe coding and build from there.
- Consulting vs. Technical Expertise. For most people with three or more years of experience, an important question to answer is whether to move towards consulting or leadership, or stay on the tech side of the shop. My personal view is that consulting roles will become less and less valuable. LLMs today can answer almost any question under the sun fairly authoritatively. If you’re not a technical expert in the specific domain you work in, you will find it difficult not only to demonstrate value to the client, but also to verify LLM outputs for accuracy and completeness. So subject-matter expertise that comes from a deep technical understanding, combined with a broad understanding of risk, governance, and compliance, would be the best mix.
- Management vs. IC roles. Similar to consulting, managerial roles are being eliminated and hierarchies are being compressed. If your role is that of a manager of managers, the future may not be too bright. In fact, with one of our teams, we are experimenting with the concept of “Holacracy” – no manager at all! I find people becoming more and more independent-minded and able to use LLMs as their mentors and coaches, making the role of managers even more challenging. I often tell my teams that my 20+ years of experience can be a liability, not an asset.
- AI in everything. A year ago, I was using ChatGPT maybe once a day. Today, I have a plethora of AI tools and tabs that I use throughout the day – NotebookLM, n8n, Claude, Canva, and my latest obsession, Manus.im. I use Manus not only to do deep research and write code, but even to develop PPTs and collaterals. Here are some of my example prompts to Manus from these past few days:
- Audit my website [] from an SEO and LLM perspective and provide a detailed report
- Build an Agentic AI app to implement SOC2 compliance for Azure
- Develop a 4-page brochure for <>. Use this link and the attached collaterals as a reference.
- Find social media influencers in the field of <> and list out their top viral content pieces
NIST’s NICE Framework – Proposed Changes
And this brings us to the specifics of how cybersecurity professionals can leverage the AI boom as a career-defining moment. As guidance, we turn to NIST’s proposed updates to NICE, where the revised AI Competency Area states:
This Competency Area describes a learner’s capability to understand Artificial Intelligence (AI) systems and to use them in a secure manner that maximizes AI’s benefits while minimizing potential negative risks
Compare this with the earlier statement that said:
This Competency Area describes a learner’s capabilities to secure Artificial Intelligence (AI) against cyberattacks, to ensure it is adequately contained where it is used, and to mitigate the threat AI presents where it or its users have malicious intent
The previous description focused on defense: securing AI against cyberattacks, containing it, and mitigating malicious intent. The new, revised description is much broader and covers both sides of the coin – Securing AI and Leveraging AI for Security.
This is a critical distinction. Our future isn’t just about stopping the bad guys from using AI; it’s about enabling our organizations to use AI safely, effectively, and responsibly. We are moving from being security gatekeepers to being secure-AI enablers. This is a higher-value, more integrated role, and it requires a new set of skills.
Deconstructing the New NICE K&S Statements
The new NICE framework proposal gives a granular list of the specific Knowledge and Skill (K&S) statements that will define the next generation of cybersecurity talent. This is your personal study guide. The proposed statements paint a clear picture of where we need to focus. Our core cybersecurity knowledge is still the foundation, but we’re building new structures on top of it.
1. Securing the AI Supply Chain & Models
The AI model itself is the new attack surface. We move from SQL injection and buffer overflows to the integrity of the models our businesses are betting on.
- AI-K005: Knowledge of AI model vulnerabilities. This is the new AppSec. Think model inversion, extraction, and evasion attacks.
- AI-K013: Knowledge of data poisoning cyberattacks. If an adversary can subtly poison the training data, they can corrupt the model’s behavior in ways that are almost impossible to detect. This is a nightmare scenario (a minimal sketch follows this list).
- AI-S007: Skill in measuring non-explainable risk. This one is huge. Much of AI is a “black box.” Our job will be to quantify and manage the risk of a system we can’t fully explain. This is a massive departure from traditional, deterministic systems.
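To make AI-K013 concrete, here is a minimal sketch of a label-flipping poisoning attack. Everything here is an illustrative assumption – synthetic data, a simple logistic regression, and a crude 10% flip rate – not a real-world attack recipe:

```python
# Hypothetical sketch: label-flipping data poisoning (AI-K013).
# Assumes numpy and scikit-learn are installed; dataset and model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Clean accuracy:   ", clean_model.score(X_test, y_test))

# Attack: silently flip the labels on 10% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("Poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude attack measurably degrades accuracy; a real adversary would flip far fewer labels, far more selectively, which is exactly why detection is so hard.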
2. Understanding the Broader AI Ecosystem
Securing AI isn’t just a technical problem; it’s a socio-technical one. We need to understand the context in which these systems operate.
- AI-K003: Knowledge of AI bias types. A biased algorithm can create massive reputational and legal risk. We need to be the ones in the room asking hard questions about fairness and representation in the data (a quick check is sketched after this list).
- AI-K023: Knowledge of misinformation and disinformation vulnerabilities in AI systems. Generative AI can be a firehose for creating convincing fake content. We need to understand how to build and deploy systems that are resilient to being used for these purposes.
- AI-K036: Knowledge of the AI system life cycle. Just like DevSecOps integrated security into the software lifecycle, we need to embed security into the entire MLOps pipeline, from data sourcing to model retirement.
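As a flavor of what “asking hard questions” about AI-K003 can look like in practice, here is a tiny, self-contained sketch that computes a demographic parity difference on synthetic data. The group labels, approval rates, and any threshold you pick are purely illustrative:

```python
# Hypothetical sketch: measuring one bias type via demographic parity (AI-K003).
import numpy as np

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)  # protected attribute: group 0 or group 1

# Simulated model decisions, deliberately skewed against group 1.
approved = rng.random(1000) < np.where(group == 0, 0.6, 0.4)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"Approval rate, group 0: {rate_0:.2f}")
print(f"Approval rate, group 1: {rate_1:.2f}")

# Demographic parity difference: near 0 suggests parity; a large gap flags risk.
print(f"Demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```

In a real engagement you would run this kind of check against the model’s actual decisions for each protected attribute, and treat a large gap as a finding to investigate, not an automatic verdict.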
3. Leveraging AI as a Tool, Securely
Finally, we need to become expert users of the very technology we’re securing. This means more than just chatting with an LLM.
- AI-S002: Skill in developing prompts for generative AI systems. Prompt engineering isn’t just a buzzword; it’s a core skill for interacting with, testing, and red-teaming generative AI.
- AI-S006: Skill in identifying possible mistakes or hallucinations in AI-generated outputs. We have to be the ultimate skeptics, capable of stress-testing AI outputs to find the subtle (and not-so-subtle) flaws before our users or adversaries do; one simple approach is sketched below.
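One simple way to operationalize AI-S006 is a self-consistency check: ask the model the same question several times and treat disagreement as a hallucination red flag. The `ask_llm` function below is a hypothetical placeholder for whatever LLM client you actually use, and the 80% threshold is just an assumption:

```python
# Hypothetical sketch: self-consistency check for hallucinations (AI-S006).
from collections import Counter

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to your LLM provider of choice."""
    raise NotImplementedError

def consistency_check(question: str, n_samples: int = 5) -> tuple[str, float]:
    """Ask the same question n times; low agreement is a red flag."""
    answers = [ask_llm(question).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples

# Usage: escalate any answer with, say, below 80% agreement to a human analyst.
# answer, agreement = consistency_check("Which CVE affects product X version Y?")
# if agreement < 0.8:
#     print("Low agreement - route to human review:", answer)
```

This is a blunt instrument – consistent answers can still be consistently wrong – but it is a cheap first filter before a human expert verifies the output.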
Notice that foundational skills like K0735 (Knowledge of risk management models and frameworks) and K0683 (Knowledge of cybersecurity vulnerabilities) are still on the list.
Where to next?
My suggestions on how best to leverage this document:
- Read the Document: Go to the NICE Framework Resource Center and review the full “Request for Comments” on the AI Security Competency Area. Don’t just skim it. Read every new K&S statement.
- Participate: This is a rare opportunity to shape our own profession. NIST is actively seeking feedback. Read the document, form an opinion, and send your comments to NICEFramework@nist.gov by 11:59 p.m. ET on July 17, 2025.
- Build Your Plan: Use the list of K&S statements as a personal development plan. Identify three areas where you are weakest and find resources to start learning; the NIST AI Risk Management Framework is a great place to start.
Here’s a ChatGPT prompt that you may find helpful:
Take the knowledge, skills, and task areas in this document, and create an 8-week learning plan for me. Assume I have 1 hour per day and 2 hours on weekends for this. Provide me with relevant online links that can help me build my knowledge. Also, propose weekly exercises I can undertake to test my skills. Assume I have read through and understood the NIST AI Risk Management Framework and have strong foundational knowledge of risk, governance, operating systems, networks, and application security; a middling level of knowledge of cloud security; and a high-level understanding of LLM architectures. I am an extensive user of AI-enabled tools.
Author
K. K. Mookhey (CISA, CISSP) is the Founder & CEO of Network Intelligence (www.networkintelligence.ai) as well as the Founder of The Institute of Information Security (www.iisecurity.in). He is an internationally well-regarded expert in the field of cybersecurity and privacy. He has published numerous articles, co-authored two books, and presented at Blackhat USA, OWASP Asia, ISACA, Interop, Nullcon and others.