Raju Raj

AI model selection

I am a senior system administrator trying to future-proof my career in this era of AI dominance. I am looking to leap into cybersecurity, develop my skills, and use a self-hosted AI model in my daily job (current work environment), which includes Windows Server, Exchange, Office 365, and Azure. I would also like to make some money through side hustles (to purchase my new home) while expanding my technical knowledge.

Currently I have a Windows 11 laptop with an i5 processor, 16GB RAM, and a 200GB SSD.
If required, I can create a Linux VM with 12GB RAM on the laptop, with internet access.

I need a versatile AI that can help me in my daily job: analyze logs and events, write PowerShell or Windows batch code for me, prepare summaries, and help me improve my skills. I would also like to make some money (achieve my goals) through side hustles while expanding my technical knowledge.
So, in this context, I need LLM recommendations in table format with the following columns:


## Specific Technical Focus Areas:
- Windows Server administration and security
- Exchange and Office 365 troubleshooting
- Azure cloud services
- PowerShell and Windows batch scripting
- Security event analysis and incident response
- System hardening and compliance

- Build a side hustle to earn some income, maybe by creating content, or by starting my freelancing career

So, how should I start?
 



Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper) 🇬🇧

Welcome to Experts Exchange

 

Why not just use ChatGPT, Grok, or Microsoft Copilot? If you want to build your own, try Llama; however, your resources are very limited with just:

 

- Windows 11 laptop with i5 processor, 16GB RAM, 200GB SSD


Raju Raj

ASKER

Dear Andrew, I have very limited access to the internet while at the office.


I'm afraid your laptop does not have the resources; LLMs take up large amounts of memory and require a fast GPU.




How about quantized versions? I see on YouTube (ads) that I can run quantized versions of LLMs with limited resources.


Create one and try it and see if you think it’s usable!


ASKER CERTIFIED SOLUTION
Shaun Vermaak 🇦🇺


You will likely be looking at a small, local model (Google it) if you intend to have everything on-device. I suggest you continue with the ChatGPT, Bard, Claude.ai, or Gemini equivalent chats, whichever you are already using. There are useful ones that can generate code and advise on what is needed. But watch out for information you are going to put into production; you should think through its correctness.

 

Anyway, https://ollama.com/ or https://lmstudio.ai/ is decent for running different models locally. If you don't have a GPU machine, it is not a showstopper; at least have one with a decent CPU that supports AVX (Advanced Vector Extensions), since model inference is mostly vector math. I believe Intel Core 4th gen (go for i5/i7) already supports AVX, and so does AMD Ryzen.
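If you go the Ollama route, it exposes a small HTTP API on localhost once the server is running, so you can script against a local model from your own code. A minimal sketch in Python, assuming Ollama is serving on its default port 11434 with a model already pulled; the model name `llama3.2:3b` and the prompt are just illustrative examples:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: server running on this port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3.2:3b", "Explain Windows event ID 4625 in one sentence.")
# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same request works from PowerShell with `Invoke-RestMethod`, which fits the scripting focus in the question.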


Optimally, use a GPU; the more VRAM, the better. If you have at least 8GB of VRAM, you should be able to run 7-8B models; I'd say that's a reasonable minimum. If you're not using a GPU, or it doesn't have enough VRAM, you need system RAM for the model instead. As above, at least 8GB of free RAM is recommended, but more is better, sort of like running your VM with 8GB. What this means is that your local machine may not be able to run the VM and LM Studio concurrently.
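That sizing advice reduces to a back-of-envelope formula: the weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus an allowance for the KV cache and runtime. A rough sketch; the 1 GB overhead figure is an assumed ballpark, not a measured value:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int,
                    overhead_gb: float = 1.0) -> float:
    """Rough memory footprint of an LLM: quantized weights plus a fixed
    allowance for KV cache and runtime overhead (assumed, not measured)."""
    weight_gb = params_billions * (bits_per_weight / 8)  # 1B params at 8 bits ~= 1 GB
    return weight_gb + overhead_gb

# A 7B model at 4-bit quantization needs roughly 4.5 GB -> fits in 8 GB
print(round(model_memory_gb(7, 4), 1))   # 4.5
# The same model unquantized at 16 bits needs roughly 15 GB -> too big here
print(round(model_memory_gb(7, 16), 1))  # 15.0
```

This matches the point above: on a 16 GB laptop that is also running a 12 GB VM, a 4-bit 7B model is about the practical ceiling.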

 




Many of the use cases you describe require the model to be able to use tools, and in some cases possibly custom tool code. There are several issues that will prevent you from creating the kind of workflow solutions you're looking for using small, locally hosted models.


Issues

  1.  Latency - Your machine is too slow to get them to respond quickly, even with simple back and forth conversation, whereas even the free versions of Gemini, Claude, OpenAI, Grok, etc., would still be much faster and more useful/accurate for conversational AI that involves helping you to solve problems with code.
  2. Tool Use - Small models can use tools, but it is very unreliable even with very good prompting chains. I have used agentic platforms like CrewAI, n8n, and Flowise, and have even written my own LangChain scripts locally to try to get autonomous tool-use behavior, with varying degrees of success. Small models simply are not smart enough to reliably invoke tool usage on their own, given a task, especially when a task involves using multiple tools.
  3. Time - The learning curve to even get to the point where you can test a workflow (i.e., analyzing security data and generating a report) is significant. Using small models during your learning/testing phase is going to slow you down considerably, because you often won't be sure whether the problem is your code/solution or the model's capabilities. Each iteration will take a long time to complete during debugging.
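To make the tool-use point concrete, here is a minimal sketch of the dispatch step an agent framework performs: parse the model's JSON tool call, look up the named tool, and invoke it. The tool name and stub output are hypothetical; the failure branch is exactly where small models tend to fall over, because they often emit truncated or malformed JSON:

```python
import json

def run_tool_call(model_output: str, tools: dict) -> str:
    """Parse a model's JSON tool call and dispatch it to the matching
    function. Every step needs a fallback, since small local models
    frequently produce malformed JSON or invent tool names."""
    try:
        call = json.loads(model_output)
        fn = tools[call["tool"]]
        return fn(**call.get("args", {}))
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        return f"tool call failed: {exc!r}"

# Hypothetical stub tool standing in for real log analysis
tools = {"count_events": lambda level: f"{level}: 3 events"}

# A well-formed call dispatches cleanly:
print(run_tool_call('{"tool": "count_events", "args": {"level": "Error"}}', tools))
# The truncated output small models often emit fails safely instead of crashing:
print(run_tool_call('{"tool": "count_events", "args": {"lev', tools))
```

A frontier model hits the well-formed path almost every time; a small local model hits the fallback often enough that multi-step workflows stall, which is the reliability gap described above.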

 

For learning purposes, I absolutely encourage you to download Ollama, and run different models locally to see what they can do.  Write code.  Test.  Debug.  There are tons of public repositories on GitHub that can get you going quickly and help you learn lots of new things.  However, if you're looking for actual useful and reliable AI solutions or agentic workflows, at some point you will need to switch to a paid API account using one of the frontier models.

All of that said, local models are getting better and better.  New models come out all the time.  We may get to a point where consumer edge devices can use specially trained local models to complete more complex tasks, without reaching out to a hosted API, but so far in my experience we aren't quite there yet.
