Stealing Bitcoins

In 2023 North Korea stole some $660 million in Bitcoin. There was just one problem: there was no one who could fence the asset. The North Koreans needed to turn the Bitcoin into cash, which is hard to do even at 10 cents on the dollar. This point and others are covered in the book Rinsed by Geoff White. In cyber security work it is good to know how assets, including information, are fenced or laundered. Even when PII and PCI data is stolen, there is only a short window in which that material stays valid. But $660 million in stolen Bitcoin is nearly impossible to launder in the normal flow of things. Outside of other governments and tech billionaires, who can cover that cabbage?

11.23.2025

The presumption of everything vs a total attack surface…

At the heart of monitoring apps are blocks of code in the form of rules. These generate events and alerts. If you review all the rules and try to map them against the total attack landscape, there are gaps. And that is not counting the security holes from the unknown things. The second issue with monitoring apps is that their rules are not linked. If, for example, a user clicks on a malicious link in an email and launches some code, say a worm that starts to hop around, tries to phone home, and tries to encrypt files on another computer it hopped to, those all fire as different alerts. Some tools may have a section or check box that will list all events for that user/computer, but not all do. Even with these downsides, we are still drawn to MIRs (managed incident response services), treating them as if they were the magic solution. One IT security leader made the statement that they could now relax on weekends, because vendor X was managing alerts. But if we research the places that got breached, most had some hole hackers got in through, even though each had all the tools in place to prevent attacks. The truth is vendors can only go so far building a tool. The attack surface is huge. The limits of vendor apps and the scope of the attack surface should lead us not to think we are safe from attacks. We have to carefully map out what an app does cover and what it does not. Next, know thyself. Know our environments and keep doing our own threat hunting. Build data analytics with our SIEMs; MIRs are generally not good at data analytics. Most of the time we lose sight of what is going on when someone else manages events. MIR tools are not a get-out-of-jail card, though many treat them as such. Using a MIR tool is okay, but it works best when you do threat hunting and data analytics in your own environment.
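
As a toy illustration of the linking problem, here is a minimal sketch, assuming alerts exported to a CSV with hypothetical user, timestamp, and alert_type columns (not any vendor's real schema), that groups alerts by user and time window to surface the chains that single rules report separately:

    import pandas as pd

    # hypothetical export; column names are assumptions, not a vendor schema
    alerts = pd.read_csv("alerts_export.csv", parse_dates=["timestamp"])
    alerts = alerts.sort_values("timestamp")

    # bucket each user's alerts into one-hour windows and flag multi-rule chains
    windows = alerts.groupby(["user", pd.Grouper(key="timestamp", freq="1h")])
    for (user, window), events in windows:
        if events["alert_type"].nunique() >= 3:  # e.g. phish click + beacon + encryption
            print(user, window, list(events["alert_type"].unique()))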

11.22.2025

Building LLMs… some may not realize that Pandas and NumPy are used in building and training LLMs. It makes sense if you think about what those tools do. Even outside AI, we use pandas and numpy inside a Jupyter notebook or Python script to do data analysis on Excel and CSV files. They take the unstructured data captured and display it so we can use it. I think it works in a similar way in LLMs: taking a request and displaying data in useful ways. That would train an AI model about handling requests and outputs. It is training me too. I have a lot of data outputs in Excel and CSV files from monitoring apps that are good at matching a rule to create an alert, but bad at analytics. Most will not break down the analytics of monitored activity for us. That is where building tools with Python, Jupyter notebooks, pandas, and numpy comes into play. Most monitoring apps have some export function for what they capture, so the data can be exported and the Python tool sets applied to it.
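
For instance, a minimal sketch, assuming a hypothetical alerts.csv export with alert_type and severity columns, of the kind of breakdown most monitoring consoles will not give us:

    import pandas as pd

    df = pd.read_csv("alerts.csv")  # hypothetical export from a monitoring app

    # counts of each attack type, the basic analytics most consoles lack
    print(df["alert_type"].value_counts())

    # cross-tab of type vs severity to see where the noise actually is
    print(pd.crosstab(df["alert_type"], df["severity"]))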

11.15.2025

Learning projects….

Sometimes in the course of our work tasks, a situation comes up that we can turn into a learning project. That recently popped up for me when I had to do a poor man's export of data from the Rapid7 SIEM in the form of CSV files. This was needed for a compliance requirement to keep 365 days of data (to be accessed in the coming year). The task specifics were just to export the data to CSV files and import them into an S3 bucket, with the S3 bucket later becoming a data source for the CrowdStrike SIEM tool. The learning project became creating a server and using Python, Pandas, NumPy, and Python mini web services, so someone can select a file and have it open in a browser. And since the Python script has HTML code for the web UI, I am playing with a default UI where we can select a given CSV file. And yes, I started with Cursor to build something, but the output was full of too much that did not work. I switched to a basic Python script importing Pandas (for the data structures and manipulation) where I just have a relative path to a specific CSV file. From that, I added other bells and whistles, asking the PyCharm app AI to suggest options to enhance things, but keeping a tight and specific set of prompts to try to prevent the AI function from going too far and getting nutty. The data analysis part stems from most EDR tools lacking good analytics. I want to know things like: how many of each attack are we getting vs others? That is a downside to most managed IRs. They work and close thousands of events, but we don't really know what that means. In many cases we think we can stop paying attention to security events because we are paying someone to do our IR. So I am learning to build data analysis tools, both to learn and to create better information. And it is hard to determine value from a managed service when, most of the time, we cannot pay attention to what they are doing.
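
A minimal sketch of the browser piece, assuming the CSVs sit in a local exports/ folder (folder name, port, and layout are hypothetical, and there is no input validation here; a sketch, not production), using only the standard library plus pandas:

    import glob
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import pandas as pd

    class CSVViewer(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.endswith(".csv"):
                # render the chosen CSV as an HTML table via pandas
                body = pd.read_csv(os.path.join("exports", self.path.lstrip("/"))).to_html()
            else:
                # default UI: one link per exported CSV file
                names = [os.path.basename(p) for p in glob.glob("exports/*.csv")]
                body = "<br>".join(f'<a href="/{n}">{n}</a>' for n in names)
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    HTTPServer(("127.0.0.1", 8000), CSVViewer).serve_forever()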

11.13.2025

Covey said to begin with the end in mind…

It was reported that BofA has spent $120 billion since 2016 on technology (how many of us begged for a new laptop?). Coming from the realities we have faced getting IT budgets approved, that $120B seems a fairy tale. Of the $12 billion BofA spent last year, $4 billion was for AI tech development, with call center CSR capabilities being a major part. Part of the expense is that turning answers over to a bot can't be wonky. And we know LLMs that are too small tend to be less accurate. Achieving even 90% accuracy requires very large models with huge numbers of training examples, plus expensive fine-tuning and training. And in the case of banks, they cannot just point the bot at one of the paid-for LLMs; the model has to be directly tied to their internal customer data. Few businesses can do what BofA or Chase can do. GRC readers' eyes are already twitching at the thought. Yann LeCun, formerly of Meta (called an AI godfather), talks about the limitations of LLMs, and that some other approach will replace them to get to real smarter-than-human levels. And we know the cost and compute needed to build LLMs just to get to 90% accuracy levels. That begs the question: what should we apply all the cost of building AI tools toward, CSR call centers or better services for the customers? What is the value of each held against the cost to create it? Most of the CSR bots have underperformed so far. A lot of that may be that the models are too small and lack the fine-tuning examples to get much past the old-school call tree technology. We don't need AI to check a checking account balance. And if there is a bigger issue, these bots are less likely to find it.

11.12.2025

I, Robot could not decide what was best for humans, in the book that is.

As much as the hype wants everyone to think AI is close to thinking, there are still basic gaps. In different examples, AI could not detect a programming app configuration that was generating an error when running some code. It could not find errors referenced in the browser when trying to play a game. It at times needed to be given programming reference documents. And often it is the human programmer who has to know which attributes to change to make the code/app work. Then at times we need to change the LLM because the first one has trouble solving the problem. There is still a difference between finding a match and solving the problem. When it is a configuration issue or an error buried somewhere, AI still has trouble troubleshooting like an experienced human. Where it does offer better productivity is building a lot of code quicker than a human can, but we still need the experienced human to fix the code and make it work. There is an unrealized opportunity to build better goods and services faster, though we tend to go for cost savings over more sales.

11.07.2025

As the old saying goes, there is no free lunch. There is a lot going on in tech, and it moves faster and faster. That is such a factor that we are often drawn to vendors that promise to manage things for us. The old line: this will free everyone up to do the important things. But there is always a root reality that attack surfaces are so huge that no EDR/monitoring tool covers everything. The realities of the time it takes to develop tools and sell them at a price companies can pay mean vendors cannot cover everything; they hedge their bets on the biggest hitters. Then there are things they simply don't do. Most rules the data is run against to find issues are singular to one thing, such as copying too many files egress. Even when an exploit starts with a bad link a user clicked on and an array of activity launches, if there are six different activities, they are each covered by different rules. And most vendors don't link the rules or put all the code into a single rule to cover multiple events; that is a lot of code to run without latency and too high a compute cost. And that is just what the vendor knows to cover and chooses to cover. They don't cover everything. They can't, yet. The correct way to handle that is to use a managed IR but still do our own threat hunting. The temptation is that once we have a tool running with a MIR, we can relax. Likely not the true case, but we hope otherwise. Besides it being a huge sell to leadership that we have to spend X $$$$ on a vendor tool, we still need to do the same work. Maybe someday AI will write code with no bugs so there is no need for EDRs, but not today.
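
To make "singular" concrete, here is a minimal sketch of the kind of one-thing rule most tools ship, a hypothetical egress threshold over a file-transfer log (file name, columns, and the threshold are all assumptions):

    import pandas as pd

    log = pd.read_csv("file_transfers.csv")  # hypothetical export

    # the whole rule: a user copying more than N files out in a day fires an alert
    counts = log.groupby(["user", "date"]).size()
    for (user, date), n in counts[counts > 200].items():
        print(f"ALERT: {user} egressed {n} files on {date}")

    # note: nothing here links this to the phish click or the beacon that preceded it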

11.04.2025

Good systems, good designs, allow good processes.

Having these things is like telling the truth: you don't have to deviate from general basic actions. Problems start when we begin tweaking things to avoid what appear to be rough spots in a cycle. Good systems, designs, and processes lessen the need to CYA. Easy to say, harder to do.

11.02.2025

Know where you are….

Using PyCharm I wrote some script code and ran it. Good enough. But then I made some changes and moved some things around. Running the code again with the changes, I was getting some funky errors. The code was correct, so why the errors? So I let PyCharm's AI chat proof the code and try to solve the error. The AI chat kind of went around in circles, and doing an internet search on my own got me no closer to solving the issue. The problem turned out to be that I needed to right-click the .py script file and select Run to rebuild the run configuration, which is a function of the coding app. The simple script was fine; I had just used the same project for a different script idea and never changed the variables in the run configuration. I am new to using PyCharm, so needing to set the configuration (assuming you do not set a default main config) was not locked in yet. In this case neither the AI nor the human solved the problem directly. I found a YouTube video on how to use PyCharm where the presenter covered some options with the configuration. I went back and followed those instructions, and the script ran fine. The AI chat could not find a match for the issue, and got nuttier and nuttier trying to solve the error. There are likely fewer data points out on the internet to train the LLM on. But maybe it will find this blog post and use it the next time someone has the same error. I say that tongue in cheek, as they used to say.

11.01.2025

One of the questions we always face is when to handle a problem with a manual get-it-done method, an ongoing task method, or by building a whole app. Sometimes we have a task that comes up where some basic command line will do the job. We may struggle briefly with how we did it the last time, but it gets done. Then sometimes the need comes up often enough that we debate whether we need to build a script to have ready. But with that we balance how long it will take to develop the script against the current task list we have to do. And it is easy to be deterred from taking the time to build a script with 20 other things to do. Then there are the cases where we really need a solution to handle the task. The need comes up so often that we debate the build-or-buy question. That may then get put on the project list with a budgeted consideration. Then came AI tools, which at first seemed like a free lunch. When it is a simple task, like finding every server with a specific exe, it may be faster to use the AI tool than writing a script; that seems to bridge the gap between manual actions and building a basic script. But even that is not yet perfect. AI-built things tend to be overbuilt, are often generated without validation, and inject junk the LLMs found and included in their training. A reality of life. And as it stands, companies now have less experienced staff who can correct the AI script or app. The bottleneck becomes a combination of compute/cost and a limitation of human engineers. Many organizations that put AI assistants out there confess they don't know if it is helping or not. Ranking the project list, even with AI, is a set of choices. And as S. Covey wrote, begin with the end in mind. That helps determine what to tackle in what order. Each need and problem is a set of choices, some of which create more problems and choices. Remember, there is no silver bullet and no free lunch.
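
A minimal sketch of the "find every server with a specific exe" task, assuming Windows admin-share access from the scanning host (the hostnames and the exe path are hypothetical):

    import os

    hosts = ["srv01", "srv02", "srv03"]             # hypothetical server list
    target = r"Program Files\SomeVendor\agent.exe"  # hypothetical exe path

    for host in hosts:
        # check the C$ admin share for the file on each server
        path = rf"\\{host}\C$\{target}"
        print(host, "FOUND" if os.path.exists(path) else "not found")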

10.31.2025

There may be a hidden value in having experienced staff use better tools in the organization, over "AI will do everything for you." There are SIEM platforms that will create custom query strings for you that do not work, and you will need to debug the code string. On one SIEM I use, I don't use its AI assistant because I can write the query string faster myself. There are two reasons why that SIEM's AI tool has only about a 20% success rate generating log search strings: one, the app does not perform a validation of the code (the lag on the back end to do the validation is too great), and two, there are not enough code-string examples in the language model. It may be better at this stage of things to have engineers write the code and let the AI part do the fine-tuning. For example, if you use PyCharm to write a Python script and get the base script built, you can then run the AI options to hunt for more bells and whistles. This vs letting AI tools create the script and then having a human debug it. I have tried both ways, and I find that if I build the base and then let AI fine-tune it, I end up with less code and a tighter script. You may not want a lot of error checking or and/if statements. Let's say we just need to copy CSV files from a server to an AWS S3 bucket, and we do this a lot of times, over and over, and we want a script to run it. Putting this into practice: if all I did was use a simple AWS CLI command, it is one short line of code. If I wrote the Python script, it is 10-20 lines of code. When I asked a tool like Cursor AI to create the script, it took 30 minutes to create a multi-page script and hours to debug it. And a tool like PyCharm detects mistakes as we write the script. It may be just me, but the middle option was the best one. And in this example the need was not an enterprise one; it was more than a sysadmin running a CLI command. So writing a tight Python script and letting AI tools beef it up was the bed Goldilocks slept in.
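
For scale, the CLI version is roughly one line, something like: aws s3 cp ./exports s3://my-bucket/exports --recursive --exclude "*" --include "*.csv" (bucket and paths hypothetical). And a minimal boto3 sketch of the 10-20 line Python version might look like:

    import glob
    import os

    import boto3

    s3 = boto3.client("s3")        # uses the usual AWS credential chain
    bucket = "my-siem-archive"     # hypothetical bucket name

    for path in glob.glob("exports/*.csv"):
        key = "rapid7/" + os.path.basename(path)  # hypothetical key prefix
        s3.upload_file(path, bucket, key)
        print("uploaded", path, "->", f"s3://{bucket}/{key}")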

10.28.2025

As much as everyone touts prompting in AI, the real story is in the LLMs and computing costs. Companies have built programming apps that work very well (to a point) largely because there was/is a ton of programming data on the internet for the LLMs to mine. The trick seems to be building our own LLMs with our own data. Bloomberg News built their own, at their own expense, but without a real ROI. We can build only so many enterprise apps using 3rd party LLMs. And in most cases we lack a large enough data set to train our own models to be accurate enough of the time, at least until simulated data is workable. And most of us don't have the compute and money to fine-tune a model rather than use RAG. Plus, many AI solutions we throw out there don't verify that they work. A number of SIEM products will build a query string that cannot run. There is a problem with verifying that AI-created solutions actually work; that is a step in the whole thing, making sure it works. Most companies that use AI tools for CSR tasks say they don't know if it is helping or not, which is an issue to overcome when the tool is a cost-savings thing and not a sales generator. A cost center trying to save cost.

10.19.2025

Prompt Programming

There is a reason there are AI programming apps that do wonders providing code: the LLMs used with them mined huge numbers of scripts, coding examples, and documents. It is one of the test cases that worked, as will any function where there is a lot of data on the internet. Where the mined data is smaller, the models have more issues getting things right. And that presents an interesting problem for an enterprise: building apps to find and use internal data. If we want to build our own apps using AI tools, the gap is with internal data. We generally do not want a commercial LLM to ingest our data, and incurring the cost to build our own model tends to be too costly, and the result will be small. Too small a model suffers from being wrong too often; good models need a lot of examples to increase how often their output is right. Nonetheless, there are still good uses for building apps with AI tools. I built a web page using Cursor AI that can run 10 pentesting apps native to Kali Linux, things like recon-ng and Nikto. It is mostly for the IT staff to use; easier than giving each engineer a laptop with Kali on it. In general I am starting to create apps that fill in tech gaps. Each renewal season I lose some tool and then have a gap, such as losing the vulnerability app that made it easy to find servers vulnerable to a new CVE. But I am building a CVE_scanner that runs Python scripts, using an AI programming app.
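
A minimal sketch of that kind of wrapper page, assuming Flask and a Kali host with nikto installed (the route name and the crude allow-list are my own hypothetical choices, not the actual app):

    import html
    import subprocess

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/nikto")
    def run_nikto():
        target = request.args.get("host", "")
        # crude allow-list so the sketch is not a command-injection hole
        if not target or not all(c.isalnum() or c in ".-" for c in target):
            return "bad host", 400
        out = subprocess.run(["nikto", "-h", target],
                             capture_output=True, text=True, timeout=600)
        return "<pre>" + html.escape(out.stdout) + "</pre>"

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=5000)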

09.19.2025

There is no grand design and no one is minding the store…

In the book I, Robot, at the end the robots seem to be starting to malfunction. They were tasked with acting in ways that were in the best interest of humans. The robots went astray because they ended up not being able to determine what was right for humans. We want instant gratification. Have our way. Not plan for the long term, but be bailed out anyway. In I, Robot, the evolution of technology to meet the needs of humans, and do no harm, ended up failing. That is in part because we don't want to do what is in our best interest. We will choose quick wealth even if that is bad, and worry about the negatives later. Technology cannot bridge that gap in human behavior with its many conflicting patterns. It cannot be the high court of choices, and as a moral authority, maybe not. All tech at its root is math: a bunch of 1's and 0's making yes/no and AND/if/else decisions, creating strings, sets of choices. AI is trained to give a "best" answer. It absorbed terabytes to the power of millions to get to that best answer. The technology has to address ethics, religion, moral questions, issues of scarce resources, law, etc., all coming from math and (with AI) tokens and pattern matching. Can any mind, including an AI one, know the best answer to every question, or are we just focused on winning the money race to be king of the AI tool-set hill, and worrying about the rest later?

09.12.2025

Even in tech, economics applies. There are just not enough customers who can afford to buy a $2,000 phone. With everything costing more and wages staying stagnant for 20 years, a product can price itself out of the market. Basic economics says that when the price of a product gets too high and there are alternate options, consumers switch. And sometimes pricing is more a factor of equity-price needs than of buyers in the market. A $1,299 phone may be more in line with consumers. The $2,000 phone is likely more about equity needs.

09.02.2025

One of the biggest challenges to adopting homegrown AI solutions in an enterprise is integration into the current system structures (that is, aside from building security systems). From traditional apps and systems operating differently, to just the lack of skill sets to do so. That is speaking at an enterprise level. We can still create different one-off apps to find things like endpoints without a monitoring agent (so many security apps lack a process to do that), or other one-off type tasks. Plugging the AI app in to hunt for agentless computers in the network is the hard part. Since AI models are trained by example, and creating AI apps and integrating them into traditional apps and systems is itself a new thing, there is a gap. The analytics currently show that 90%+ of in-house AI app projects do not make it into production, and for reasons of the hardship of integrating the current enterprise systems with the AI ones. The current traditional apps are programs that run in set, repeatable patterns, and AI is more dynamic. Without the examples in training, the last mile is the hardest.
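
The agentless-endpoint hunt itself is simple enough to sketch; assuming a network inventory export and an agent console export as CSVs (file and column names are hypothetical), the gap is just a set difference:

    import pandas as pd

    inventory = pd.read_csv("network_inventory.csv")  # e.g. from a discovery scan
    agents = pd.read_csv("agent_console.csv")         # hosts the EDR knows about

    # normalize hostnames, then find inventory hosts with no agent reporting
    known = set(agents["hostname"].str.lower())
    missing = inventory[~inventory["hostname"].str.lower().isin(known)]
    print(missing[["hostname", "ip"]])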

08.29.2025

Going forward with AI, security will have to pay attention to AI drift, input, and API-type things: a different set of monitoring types. Unlike traditional software, where processes and activities are more predictable, AI processes can go in different directions, and drift. There will be different monitoring and controls for that. And since AI is asked to do things, there will need to be monitoring and controls around that too, things like stopping an input telling it to unpatch something or to add bad data.

And if AI does something bad, do we have to call HR?

08.26.2025

Low cost VPNs

Low cost VPN services come with a lot of risks. The travel path may start in Russia and route through Amsterdam before hitting your org. The risks include weak encryption, malware and adware, data logging, DNS and IP leaks, and more. The best options for protecting the organization are to use MFA, use IDS/IPS tools with block-type rules, segment the network, and patch everything. And of course monitor for successful connections, not just the failed ones. Low cost VPNs tend to get overlooked as an attack surface, but they are one we need to pay attention to.

08.15.2025

An old engineer saying

They used to say there are fast, good, and cheap as choices when building something, and you can only pick two of the three. Something can be good and fast, but it will not be cheap. It can be good and cheap, but not fast. Sometimes the business imperative of saving cost is the first and foremost factor that decides this. AI may build software quickly, but there are still bugs and security holes in it. But the need to get your product out there fast is hard to avoid when you want good and cheap. And it is fair to say that using AI to build apps addresses the cost-saving side, not the support and deployment side. Many things now are the devil of good and fast, but cost is king. Plus, it takes time for buyers of software to move from a Salesforce to an AI-created option, just as it does when we move from one app platform to another. And that includes the average 3-year contract cycle. The creator may make their choices of fast, good, and cheap, but the buyer will likely lag behind. The creator is chasing market share, and the buyer lags behind that as well.

07.31.2025

Servname not supported for ai_socktype

Everything with computers is a clue. If you discover an FTP server out there, for example ftp.cutepugggies.com, and run dnsmap, you will likely get some sort of IP address output for the domain. Then if you try to open a link to that FTP server and add -A, you may get the ai_socktype error, which is in reference to the TCP and UDP socket types socket.SOCK_STREAM and socket.SOCK_DGRAM, respectively. Meaning the server accepted the connection on some level and let you know "oops, that is not allowed" rather than giving you back nothing. It tells us we are making contact. From there you just need to keep digging. And of course fix the security issues.
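
You can reproduce the error from Python. A minimal sketch, assuming a typical Linux /etc/services where ftp is defined for TCP only: getaddrinfo maps a service name to a port for the requested socket type, and asking for a TCP-only service over UDP raises the same gaierror:

    import socket

    # "ftp" is defined for TCP in /etc/services, so this resolves to port 21
    print(socket.getaddrinfo("localhost", "ftp", type=socket.SOCK_STREAM))

    try:
        # asking for the same service over UDP is what trips the error
        socket.getaddrinfo("localhost", "ftp", type=socket.SOCK_DGRAM)
    except socket.gaierror as err:
        print(err)  # Servname not supported for ai_socktype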

07.30.2025

Getting AI past being a spell check

From investment firms' concerns to security issues, AI is still trying to get there. It is still an idea in search of being a product. AI is not yet a product like the iPhone on which the next money-earning wave can be shouldered. A recent Bloomberg News article noted that some hackers were able to instruct an AI tool to basically unpatch itself. In another example, hackers were able to inject their code into things coders used; when the coder used the AI tool it returned code with intentional security holes in it. If we take a step back from the business side of using AI and look at it as security engineers, the first thing to see is that the AI tools, and the code returned to the coder, are not checked by AI. Our want from AI is that the code it gives back would be bug free, and in time AI should replace the general EDR system. At this stage AI is a faster search engine with better input tools. It is not yet a solution that will write error-and-security-hole-free code or write original security monitoring apps. The challenge for us security engineers is that the cost savings of letting AI output 80%+ of the code the company will use comes with bugs and holes in it. The financial savings are a hard point to overcome when security holes are an abstraction, until someone tells the AI to unpatch its code and we are hit with ransomware. When the company saved costs by reducing the coders we have from 100 to 30, saying "but we could get hacked" seems hollow at best. Tough being us.

07.23.2025

Use the tools

A common situation in most companies is not building out a security app after deployment. With next-gen firewalls, SIEMs, EDRs, etc., we often do not use all the available options. We get into habits, and trying an unused feature takes on a risk element. We often find very few of the security features are used in next-gen firewalls. We often do not mine data out of our SIEMs, build automated workflows, or build custom alerts based on knowing our environments. "Out of the box" are the words guiding us. We rely on the vendor, assuming they are covering everything when they are covering just enough for it to work. Some of that is the sense that we just spent a ton of cash on the app, so why do we have to do the work? Fair enough, but security apps are tools. They are frameworks you build on. Security apps are not an end in themselves and are open to expanded use. The goal is to protect the enterprise, and that includes building out tools after the deployment. And that means learning new skills and getting better at new things.

07.21.2025

One small thought

Always deal with remote execution vulnerabilities in your compute environment, and not just OS-related ones. Most of us are okay on patching the OS; fewer are good about updating apps and related things. Having a web server with a remote execution exploit listening on ports 80 and/or 8088 will be discovered. It is not hard to discover websites by scanning for domain names and then determining the OS, web server brand, and relative ages of all that. So if a hacker finds payme.highworldfinding.com, scans for the open ports, and finds port 80 open, they can run Nikto or the like and find vulnerabilities and other bad things. Patching apps and doing some basic pen testing is a cheaper way of securing the environment, and it is not the hardest operational process to put in place. A final thought: a vulnerability management solution generally points out the exploits, it doesn't fix them. That part is on us.
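
The discovery step is trivial to reproduce. A minimal sketch, reusing the hypothetical hostname above, that checks whether the common web ports are open before pointing Nikto at them:

    import socket

    host = "payme.highworldfinding.com"  # the hypothetical target from above

    for port in (80, 443, 8088):
        # connect_ex returns 0 when the TCP handshake succeeds
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
            print(f"{host}:{port} {state}")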

07.18.2025

Network Packet Sniffer, VM scanning, and Alerting

There is activity that happens continually in a computing environment that largely goes undetected or even unwatched. Think of the general monitoring tool: most do not do network packet inspection. So if someone is scanning your network from the outside, other than some things on the firewall, what will alert us? Curious about the gaps EDRs have, I did some testing where I ran Wireshark and then a series of tools, such as nmap, zenmap, recon-ng, etc., to see what Wireshark would show, and what a custom alert or monitor rule could be used to detect. The first stark result was that Wireshark displayed a lot of red traffic with my Kali laptop as the source. At the least, the firewalls and VPN devices should have events. And if the network devices are configured as data sources for the EDRs or SIEMs, we should at least get some raw log information. There are also clues that come out of barrier system raw logs. If we see X access attempts from a blacklisted IP or country in Y seconds/minutes on your SIEM/EDR systems, there is a good chance there is not a solid rule (or any rule) configured on the barrier device. I had one situation where a backup VPN device was out of date and did not have a rule to block hack attempts. The device was discovered, and parties were sending thousands of attempts, because they could. Once a rule was added, that helped reduce the attacks.
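
As an example of the custom-detection side, a minimal sketch using scapy (assuming it is installed and the script runs with root privileges; the threshold is an arbitrary choice) that counts TCP SYNs per source and flags scan-like bursts:

    from collections import Counter

    from scapy.all import IP, TCP, sniff

    syns = Counter()

    def watch(pkt):
        # count bare SYNs per source; a port scan piles these up fast
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
            syns[pkt[IP].src] += 1
            if syns[pkt[IP].src] == 100:  # hypothetical threshold
                print("possible scan from", pkt[IP].src)

    sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=watch, store=False)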

07.17.2025

Your item has shipped, click here to track it!

But wait, all it says is the label has been created, not shipped. My guess is it's a marketing thing trying to be too cute: invoke the rush that my stuff is on the way, when it is not. Brand trust is a critical thing. Getting the message "your item has shipped" 2-3 times when only the label was printed creates negative reactions toward the firm in customers' minds. Oh, they are playing me, getting my hopes up like a cheap thrill. Tech needs to help build brand loyalty and trust. We should have the message be "the label has been created, almost there," then send the "item has shipped" message when it really was shipped, but only when it has in fact shipped. It seems we are missing an opportunity by saying an item has shipped when only the label was created. We are all in sales, as the saying goes. Marketing, please take note. Stop making our jobs harder.

07.16.2025

More Fun with Python

It may be silly of me, but I get a kick out of running python -m http.server 8000, then going to a browser, typing http://localhost:8000, and seeing the directory tree. Navigating the folder structure and opening files without vi or the like bridges the system OS gaps. All of that I was doing on a Kali Linux ThinkPad Carbon X1 5th gen. I was also running some tests to see what Wireshark picks up when creating reverse shells using Python to create TCP/UDP stacks, etc. Sure, Wireshark is a packet sniffer and most EDRs are not, but I want to understand why EDRs cannot detect someone creating a Python UDP client and then running a Python ncat-type tool. Part of the game. Black hats are trying to find the gaps and evade sec-app tools. It is good to learn how and where the licensed apps we spend huge money on fail, and what they don't even look for.

07.15.2025

SAAS paid monitoring is not a get-out-of-jail-free card. There is a lot that does not get watched. Some solutions, well known ones at that, use tactics to show us a lot of data which does not line up. In one such tool there was an event where host 1 communicated with host 2 for the first time, and that generated an alert. However, in the event data stream there were 150 account lockout events. After much searching, the two sets of alerts were unrelated. The fact that host 1 talked to host 2 had nothing to do with the lockout events. So why are you showing me the huge blob of data? Well, that is just how it works. And I should know to just click a tab for "alerts" that is for the "actual alerts." Which to me was not the point. I asked several times why the extra unrelated data was being shown, and no answer was coming. The vendor product support engineer had no idea. It really was just how it worked. Personally, I think it is to give the sense more is being done than there is. The service is paid SAAS; more data, even disjointed, is better, for them. The other issue was: why was I not getting alerts directly for lockout events? I had to write a custom alert to find that on my own. Paid IR are magic words. They are the promise that for a little money we will cover everything. Of course, I am more critical these days. After hard work to understand what I am looking at, I can see and find the gaps. I can see what is being covered and watched and what is not. Many think all data is recorded to a log and an EDR or SIEM just catches it all. But even in AWS we have to capture data to an S3 bucket and create custom alerts to be warned. Many just think the money paid for a service does that. Not as much as we wish. If we write crap API code and don't make it secure, monitoring solutions will not find that stuff. If your web site allows frame insertion, you have to use OSINT tools for that. Some enterprise solutions don't even monitor if a child process has the wrong root process ID, or if notepad is making LDAP queries. But because we get event ID 4740 for account lockouts (from the DC security logs), it gives us the sense all things are covered. No tool, paid or otherwise, can do everything. But we need to see and mind the gaps, before we pay up for 3 years.

07.09.2025

Pen Test Yourself

A few times a year I like to run pen tests against my home devices and network: the normal checks of what vulnerabilities may be showing up. I may start with Armitage and do an nmap scan to see what is showing up, and what services. You can use zenmap and other tools, and switch to Metasploit if Armitage does not have all the exploits available. Like any pen test, these tools may not find everything. So have a little fun and see if you can gain access to your own systems. See what the kid next door who wants to be a hacker would try against you. Be the kid; hack yourself. If nothing else, you learn more about how to find things, and understand things. Many security engineers come from Windows OS backgrounds, so the habit is: if you patch the OS and have anti-virus running you are good, right? Plus, the more you pen test yourself, the more you see the gaps that EDR and other monitoring/scanning tools miss, or just don't look at. Not all security tools check for child processes that don't match their root process. The first defense is always to patch the OS, but also update the apps. Then pen test your own kingdom.

07.02.2025

Just a musing…

AI is supposed to give us better searches. However, I have been finding that is not always the case. I searched for which beers available in the 1960s are still sold today, and it gave me only a list of beers no longer available. In some search engines, it seems the first thing returned is an AI generated summary of a topic; everything additional is the old search results. Then in other ways, they try to make it seem AI is writing something on the fly. It seems more like the search found a Python script and is just displaying the result line by line, making it look like AI is writing it. Marketing. But the blob of data you get when you search on "what does expensive query mean" is precise writing. Very uniform. You can tell a human did not write it. That is great for dictionary-type things. I have never used an AI tool to write a term paper, but I think it would be noticeable that the student did not write it. Each person has their own way of presenting writing; most of us are not so precise. In a professional chess match they track how often each player picks the correct next move. Most grandmasters are in the 90% range, but never 100% correct. You can ask humans to write an article about global warming and some people will be wordy and put in too many "on the other hand" things. AI gives a precise document on a topic, with a lot of data to draw the article from, then outputs the data in a precise way. And most code AI gives us is not perfect; someone still has to review it. It appears to be writing code on the fly, but then it should be original and not have bugs. Who wants to bet there will be a future event where coding bugs in a DLL blow up the world, because every AI used the same DLL it learned from?

07.01.2025

Tech Tips - The EDR Trap

There is a habit we have, born out of too much to do and tech changing too fast, of over-trusting EDR apps. We tend to think that an EDR system is covering every process and activity going on on our computers. The truth is closer to this: they only monitor a small number of things. Vendors or staff have to code every rule that looks for things that need to be alerted on. Covering everything would be too costly an effort; no vendor app could be affordable if it did, and that is not taking into account the resources needed. Next time you have a security monitoring vendor in for a dog and pony show, ask the questions: if someone tries an API injection hack, do you monitor that, and if a child process changes its root process ID, do you alert on that? Sure, they will tell you if someone was added to the domain admins group, or sets a user password to never expire, and they read DC security logs for event ID 4740; most do. They tend to give you these types of alerts, but go less deep than you think. When you buy an EDR system you think it is covering more than it is, and because we spent money, it is the vendor's problem to cover things. The best we can do is pick flexible solutions that allow custom alerts and rules built from the collected data. And don't assume that if you point a data source at a SIEM, everything is picked up. In most cases, if you have an in-house app and credit card numbers are recorded to a log, don't assume that log is read and the CC write is captured by the EDR or SIEM. Generally you have to create a collection type and write custom rules to generate notices. If you create an app in AWS, you have to create the collection and write custom rules in CloudTrail/GuardDuty, then port that collected data to a SIEM. It is not magic. We still have to do the work.
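
As an example of that custom collection work, a minimal sketch (the log path and format are hypothetical) that scans an app log for card-number-like strings, validated with the Luhn check:

    import re

    PAN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose card-number pattern

    def luhn_ok(digits):
        # standard Luhn checksum: double every second digit from the right
        total = 0
        for i, d in enumerate(reversed(digits)):
            n = int(d)
            if i % 2 == 1:
                n = n * 2 - 9 if n * 2 > 9 else n * 2
            total += n
        return total % 10 == 0

    with open("app.log") as log:  # hypothetical in-house app log
        for lineno, line in enumerate(log, 1):
            for match in PAN.finditer(line):
                digits = re.sub(r"\D", "", match.group())
                if luhn_ok(digits):
                    print(f"possible card number written at line {lineno}")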

06.27.2025

Tech Tip - Zenmap

One security tool that most people find helpful and easier than most is Zenmap. Zenmap has a nice GUI that is easy to understand and use. With Zenmap you can scan a whole IP subnet or a single host. There is an array of scan levels, from intense to ping to regular scan, depending on your needs. From the scans you see a lot of information, such as open ports, the type/brand of router, and host details. I tend to use Zenmap internally as a housekeeping tool. It shows me what hosts have appeared in different parts of the internal network, what OS they run, or when they last booted. Zenmap may not be used for pen tests, where nmap is more common, but it has its place in your tool box.

If you search Zenmap usage you may see it described as a limited vulnerability scanner. This tool is more of a network and host discovery thing; nmap and other tools are better for vulnerabilities. And that is from an external testing perspective. For internal vulnerability scanning, use something like Rapid7 VM or a like tool.

06.26.2025

Tech Tip - Using Python to create TCP/UDP Clients and Servers.

When pen testing, sometimes access to a target requires a TCP client to test a service or send data. To do this we can use Python to create a TCP/UDP client. This involves a basic amount of code:

    import socket

    target_host = "ftp.cutepugggies.com"  # placeholder target
    target_port = 21
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((target_host, target_port))

You get the basic idea. There is similar code to make a UDP client or a server. From there we can use Python to create a sniffer, or use Burp Suite and other tools. Python can give us the tools to probe systems that do not have the tools installed.
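
For completeness, a minimal sketch of the UDP variants mentioned above (127.0.0.1 and port 9000 are arbitrary choices for a local test):

    import socket

    # UDP server: bind and wait for whatever arrives
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 9000))

    # UDP client: no connect needed, just send datagrams at the address
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"probe", ("127.0.0.1", 9000))

    data, addr = server.recvfrom(4096)
    print(addr, data)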

And it is good to know how these things work. Hackers sure do. Those of us on the inside need to know how things work to safeguard against the bad guys.

06.25.2025

How could this happen????

That question came up when it was discovered that a coin miner had been installed on a Linux server in company X's server farm. It was not such a mystery. The server in question was a web server with port 80 open to the internet, and it was running an out-of-service version of Tomcat. The unwelcome guest just used an OSINT app like recon-ng to get a list of domain names and IPs, then ran an app like Nikto, Metasploit, or even nmap to discover vulnerabilities. With the open port 80 they had a good idea it was a web server, and then what kind of web server. Then it was just a matter of exploiting a remote execution hole to gain access to the box. There are two things to be learned from this. The first is to always update the apps on a server; just patching the OS is not enough. The second is that many servers are public facing and there are many OSINT tools anyone can use to discover you. I regularly run recon-ng to see if senior team members went into the wild and registered their corp name at an event or conference (it happens a lot, and they use the same internal password).

But when we are spending a ton of cash on SAAS solutions we think they cover more than they do.

06.24.2025

Small tech tip.

Have you ever been trying to test something, or see how an open source app works, but needed a web site? There is a simple option: Python3. Python3 has a command, python3 -m http.server 9000 (the port), that starts a mini website. If you open a browser and type http://localhost:9000, it will display a folder list. Using this simple website you can run a scan with Nikto to see what security holes there are, or test with Burp Suite. It is limited but useful.

03.24.2025 The Art of Cyber Tech

If anyone spends any time in red-blue team activities, or even the black-gray-white-hat areas, you may have noticed there are not a lot of pen testing tools around files. Some monitoring tools will have detection rules for downloads and uploads of files. They may have detection rules that say user bob1 downloaded 11 files from SharePoint or OneNote, and that the extra file is X% more than the rule was set for. And what do you do with that other than contact the user? If the user says no and some hacker stole files, it is kind of like counting how many chickens you have left after the fox has visited. For defense against file theft, you have to create good defenses from the outside in. You need DLP tools to help prevent files leaving the coop, and good internal ACL controls and isolation of critical information sources. Because if you look at the penetration tools included with Kali Linux, there are really no tools for accessing files. The core reason is that if you get inside the hen house in root ways, no tool is needed but copy. There is the old joke of the dreaded right-click attack: when someone is both an AD and VMware admin, they could steal all of AD with a right click and copy. A lot of that is because in most orgs' minds, monitoring tools that guard entry into the hen house are the same as internal controls. DLP seems like a nice thing to get to later. Once inside the hen house, the fox only has to grab and run.

03.05.2025 The Art of Cyber Tech

There is one major difference between the hacker and the organization: hackers are very hands-on and businesses want set-and-forget. The average CISO wants a handful of silver bullets where they pay a vendor to man the watchtower watching for an attack. That remains the case even though most hacks into an organization go undiscovered. We separate ransomware from other hacks because of how each is conducted. But hacks into a business most often come from finding an unknown weakness. Before Log4j, no security app scanned for that. There was no event being written to event logs (though most people don't know how EDRs work). Nonetheless, companies pay for solutions they want to turn on, and then wait for email alerts and notices from the vendor. Half of that is too many tasks and too little time, and being UI junkies. It takes time and effort to learn how things work under the covers, so wanting a tool to tell you there is an event comes with the territory. Cyber security is a lot about knowing what your treasure is, what hackers will try to steal, and what is abnormal-normal in your environment. And each environment is different. The security tools you can buy aim to cover the most common things. In your company, the weak links are not likely to be the common things, so vendor solutions are less likely to cover you. I know it is a hard pill to swallow. No one wants such an open-ended set of tasks. It is easier to tell ourselves we can pay some money and not have to worry about security. But hands-on is where the hackers live. They get into the weeds looking for a weakness and exploit it. And they use the same tools white hat admins use, refining their hacks until they go undetected.

03.03.2025 Blog 3 Scott Steenburgh

Sabotage in the kingdom… sometimes when tech people stay in the same company and role for too long, they start to feel a sense of ownership. They start to forget that they work for an organization, an organization that has goals and objectives. We get to a place where our own preferences seem to us to be the same as the organization's, though that is rarely the case. We tell ourselves what is right for me must be right for everyone; why else would I do what I do if it was not right? Then in some cases a peer or someone in another department does something we disagree with, so we throw them under the bus. We willfully cause them to fail, all the while believing that action does no harm to the organization. The loser broke our rules, after all. Sometimes when I have seen this happen and it comes to light with the leadership, the tech admins are shocked that negative reactions came back to them. And again, this happens when we are in the same company doing the same role for too long. That builds complacency. It becomes a loop where we say: I must be right because I have been here 12 years; if I were wrong to sabotage someone, I would have been booted out a long time ago. Such behavior is hard to see, and to see clearly. That means organizations have to continually build a team culture of what is good and what is bad behavior towards each other.

01.17.2024

Since 2007, technology has sped up to such a rate that some projects can be behind before they are completed. Given the realities of fiscal years and budgeting, the approval cycle can last longer than it takes to buy and deploy a solution. Then, if we guess wrong on the technology, it cannot be easily undone. This has been happening often over the last 5+ years. IT makes a choice, maybe the vendor's sales team stretched the promise a little, and then you realize the mistake. But likely we have a 3-year deal (to make the budget work), and the CIO/CISO are on the hook, so you live with it. In the meanwhile, the right solution keeps evolving. By the time we get to where the wrong choice can be replaced, everything has changed. The first thought for getting around this is to view the tech we use as part of the business processes, and the leadership team needs to buy into that. At the very least we all have to give the speed of technological change a good deal of respect. We cannot think of it as a basic tool, like a printer. It is not often that companies factor in how fast tech changes. Perhaps we need to do that.