Tuesday, December 18, 2018

My view and experience with IT certifications


I run into plenty of debates about whether IT certifications are good or bad, what the people who hold them are capable of, what the expectations are, and so on. This post is not just about IT security certs, but IT in general. Personally I love doing certifications and have plenty of them from various vendors, so I thought I would share my view and experience of the, let's call it, "certification industry".

People with and w/o certs

Personally I see and know basically two types of people: those who love to do certs, or at least want to do a few, and those who don't give a sh*t about certification. I want to make it clear: this doesn't reflect their skills or knowledge at all. Some of the most skilled people I know have exactly 0 certs and don't really care about them, and some have plenty, or at least a few. I think this really comes down to personal preference. There are also a few people like me, who simply like to do certifications; they are "certification monkeys" (I heard this term from Jeremy Cioara, who makes excellent Cisco CBT videos).
There is another aspect to this. I have regularly participated in technical job interviews over the past couple of years, nowadays for IT security roles and in the past for networking roles. In both fields there are certain certifications that, if present, almost always mean the person has deep technical knowledge, can answer the questions, and in general will do well in the interview; I am specifically talking about OSCP and OSCE for IT security and CCIE for networking. I don't think it's only because of the actual certification, but because of the soft skills you need to achieve it. Assuming you don't cheat, and I want to believe most people don't, you really have to gain plenty of knowledge and put it into practice: keep persisting, keep trying, keep learning, put your energy into it... so you can expect something solid from those people. There are people who claim plenty of experience, and some, even after telling me they have done webapp pentesting for years, can't answer a simple question about XSS. Again, this is not everyone; there are super smart people without certs (see above), but the likelihood of running into a low performer with good certs is smaller.
So as I see it, there is this correlation: if you have a certification which is hard to achieve (OSCP, CCIE), you are most likely a capable person. And let's make it clear again: *it doesn't mean that if you don't have a certification you are not capable*.

There is also a phenomenon where some people with OSCP, CCIE, etc... you name it, get high-minded. I hate that. Please don't! You are not better or smarter than others because of that. No problem with being proud of it, but it goes wrong if you place yourself above others because of a badge.


Unfortunately there are also people who achieve plenty of certs using *just* braindumps (dumps of real exam questions), taking an exam every other week. I think this not only makes the certs less valuable overall, but morally I simply can't (and no one should) agree with it. I personally knew someone who did this and got his CCSP cert in a month, which means he did an exam every week for 4 weeks. During a job or an interview it will very quickly turn out that someone gained his/her certs by just memorising braindumps.
On the other hand, I must admit that with Cisco exams I also used braindumps after studying; I will write about that in more detail below.

What knowledge to expect from certified people?

I often see comments trying to either degrade or glorify certifications, and as I see it, those come from having totally wrong expectations of the people or of the certification itself. First: a certification doesn't replace experience. For example, if someone has an OSCP (but no experience), it doesn't mean that he/she is ready to find you 0-days, write kernel exploits, be a neat web app pentester, or conduct a full red team operation at a company right away. On the other hand, I believe that person will have a solid foundation that you can easily build on; you can quickly put him/her to work without too much training, and he/she will do fine alone pretty fast. The same is true for CCIE. Think of these like a university degree. Are any of those people ready to work right out of education? No way! You need to spend weeks teaching them how to use the systems they will work on, and even then they will be considered beginners. But they have a solid IT foundation and a way of thinking that you can build on.
What to expect from people whose certification exam is multiple-choice? Well, definitely less than from those who passed practical, hands-on exams. The personality of the person plays a big role there, but their knowledge is probably above 0. I'm not a big fan of these, although I personally did many of them. I will write about them in more detail below.
In short, I think these certs provide the person with a good foundation that you can build on. Nothing more, nothing less.

Why to certify?

There is definitely an advantage on the job market, especially with headhunters: having the right certs makes getting the job, or at least passing the first round, easier. Unfortunately many HR people have no idea what these certs really mean or involve; they just look for the keywords. I remember 10 years ago my colleague was asked by an IT(!!) headhunter: "Do you have such a thing as CCIE?" She had no idea what it was and asked it as casually as if every other person should have one. This is still true today, not everywhere, but in most places. If you apply for a Cisco job, HR will pass your CV to the technical staff more easily if you have CCNA, CCNP, etc... This is unfortunate, but I suppose we have to live with it and educate first-round interviewers at the same time.
Besides the above, I personally like to do them for the following reasons:
  • It's a good challenge, and I like challenges.
  • It forces me to study the material more in depth, and makes me remember it for a longer time.
  • I like to collect badges :D

Multiple choice vs practical exam

Obviously practical exams have the most value; I think that's a no-brainer. At the other end you have the multiple-choice exams, and in my experience they can be further split.
1. Cisco style
Cisco is the typical example of an exam I believe is highly unfair. They put in plenty of lexical questions that no one on Earth will know, or give you options that vary only slightly. For example, they had items like: "What is the colour of the Cisco wireless desktop agent if the connection is bad?" and you can choose from red, orange, yellow, and some others. Seriously, why is this important at all, and who remembers this? It doesn't reflect your actual knowledge. The other type is where they give you a command with 4 very slight variations, and it's never an everyday command. In practice, on Cisco (and probably most enterprise-grade network devices) you will use tab completion and question marks all the time, because you can't remember every single command. You will know some, but certainly not all of them, and with such command-line help available in the OS there is no reason to memorise them. Honestly, this is why I used braindumps: I believe these questions are unfair and not designed to properly assess the student's knowledge. Not 100% of the exam is like this, but a significant part. In reality I don't know a single person who doesn't use braindumps, for the reasons above. I always learned the material and did plenty of practicing, and I do feel that I know the material I took the exams for, so I don't feel that I really cheated.
2. SANS style
SANS also uses multiple-choice exams, but there are big differences. One is that you can use the study material, which means that even if you get a lexical question you can look it up, although you need to know where to look. Second, you typically don't get such questions, but rather ones where you actually need to apply what you have learned. I think this is much better. You can usually do 2 practice tests before the exam, which have a similar style of questions to the real one, but not the same ones. I never used braindumps with SANS exams; if you learn the material there is no need, and I passed all of the exams I took on the first attempt.
3. EC-Council style
Maybe they have changed it by now, but in the past their exam was a joke: a few lexical questions and plenty of questions you could answer with common sense, especially in the CEH/ECSA exams. In short, it doesn't really reflect anything.


Renewal policies

Renewals are probably where the certification industry has gone mad; this is the point where you will certainly feel that it is only done to harvest money, and you will quickly get disappointed. The general concept behind renewal is to demonstrate that your knowledge is maintained and up-to-date, yet none of the renewal methods I know of actually ensures that. Here is why:
Cisco policy: in order to renew any associate/professional level certificate, you need to pass one exam from the same level or above, and you need to do it every 3 years. This effectively means that if I pass *any* professional exam I can renew both my CCNP Routing & Switching and my CCNP Security certs. Passing, let's say, a switching exam has nothing to do with the security track, but the security cert is still renewed. Why? It doesn't guarantee that my Cisco security knowledge is up-to-date. In fact I have both of those, and while I still feel confident that I have a solid CCNP R&S level of knowledge, that's certainly not true for the Security part, yet it gets renewed along with the other one. This just doesn't feel right. If the renewal doesn't fulfil its purpose, why have a renewal policy at all? Money? Renewing a CCIE is even worse: you need to pass the theory exam every 2 years, despite the fact that if you passed it once, the material has most likely sunk in so deeply that you will remember it longer than any professional-level material.
SANS policy: collect credits for 4 years, and if you have enough you can pay a fee and there you go, your cert is renewed. You can collect credits by taking trainings (SANS trainings are worth more than others...), going to conferences, etc., just like with CISSP. I could renew my malware reverse engineering cert by taking a forensics training. Why? That training is different and didn't really contribute to my reversing skills; it certainly doesn't prove that I can still dissect malware. SANS tries to make it look like it does, but if you are honest, it doesn't. Again, the renewal doesn't prove that you are still good at the topic; they just take your money.
EC-Council: you similarly need to collect credits. The exact same story as with SANS, but instead of a one-time fee every 4 years, they ask for an annual fee. Why? Just to harvest people's money.
Offensive Security: no renewal. I like that. I think in order to pass the hands-on exam, you have to study the material so much that it will sink in for a very long time.

I feel that the general concept of renewals is wrong at its core. You don't need to renew your university degree, although universities could easily claim that you forget the material after a few years. I certainly don't remember the mathematics I learned for 2 years; I never used it, never really liked it, so it just faded away, and I think that is true for most people.

Vendors still push for renewals, and I feel it's only about tying you to their trainings and exams, and getting your money.


Closing thoughts

I have many, many thoughts on this topic; this post was pretty hard and took long to write, and I still feel that I couldn't phrase everything I wanted. I might be wrong, but currently this is how I see things, and no one has to agree. The most important thoughts I would like people to take away from this post are the following:
  1. Certificate holders: please don't be high-minded; what you have is "just" a foundation, and there are huge numbers of super smart people without certs. No problem with being proud of it, but keep it at a healthy level.
  2. Non-certificate holders: please don't degrade certificate holders' achievements; in some cases what they achieved is really notable, not easy, and not something everyone can do.
I have a bit of fear that this post will generate a burst of hate from both sides, and from vendors, but:


Windows Driver Signing Enforcement bypass

I uploaded all of the materials and files from my latest DSE bypass workshop, which I held at DEF CON, hack.lu and Hacktivity, to my GitHub page:



Friday, August 31, 2018

About WriteProcessMemory

The contents of this post might be very well known to many people, but for me it was new and, honestly, a bit shocking, so I thought I would share it; it might be useful for others as well. I came across this behaviour while developing a working POC for enSilo's new TurningTables technique.

In short, WriteProcessMemory will write to PAGE_EXECUTE or PAGE_EXECUTE_READ pages by changing their permissions to PAGE_EXECUTE_READWRITE, provided you have sufficient rights (PROCESS_VM_OPERATION). I want to highlight right at the beginning that this does not bypass any built-in security feature, nor exploit anything; it is just a convenience feature.

First I will cover how it works, and at the end why.

Part 1 - How?

This is how WriteProcessMemory works on the latest Windows 10 (1803):

First it will call NtQueryVirtualMemory to get the properties of the region.

The next step is to check if the page has any of the following protections set: PAGE_NOACCESS(0x1) | PAGE_READONLY(0x2) | PAGE_EXECUTE (0x10) | PAGE_EXECUTE_READ (0x20) 

Looking at the check bitwise, the 0xCC mask is the four writeable protections OR-ed together (PAGE_READWRITE 0x4 | PAGE_WRITECOPY 0x8 | PAGE_EXECUTE_READWRITE 0x40 | PAGE_EXECUTE_WRITECOPY 0x80):

0xcc = 1100 1100
0x1  = 0000 0001
0x2  = 0000 0010
0x10 = 0001 0000
0x20 = 0010 0000

None of the four protections listed above overlaps the mask, so the TEST instruction sets ZF exactly when one of them is present. If ZF is not set, the page has a WRITE bit and the function goes straight to the NtWriteVirtualMemory call:
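As a rough C sketch of that first check (the constants match winnt.h; the helper name is mine, not Microsoft's):

```c
#include <assert.h>

/* Page-protection constants, as defined in winnt.h */
#define PAGE_NOACCESS          0x01
#define PAGE_READONLY          0x02
#define PAGE_READWRITE         0x04
#define PAGE_WRITECOPY         0x08
#define PAGE_EXECUTE           0x10
#define PAGE_EXECUTE_READ      0x20
#define PAGE_EXECUTE_READWRITE 0x40
#define PAGE_EXECUTE_WRITECOPY 0x80

/* 0xCC is the four writeable protections OR-ed together */
#define WRITEABLE_MASK (PAGE_READWRITE | PAGE_WRITECOPY | \
                        PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY)

/* Mirrors the TEST reg, 0xCC instruction: returns non-zero when the
   page is already writeable, so WriteProcessMemory can go straight to
   NtWriteVirtualMemory without touching the protection at all. */
static int page_is_writeable(unsigned int protect)
{
    return (protect & WRITEABLE_MASK) != 0;
}
```

If page_is_writeable() returns zero (ZF set), one of the four non-writeable protections is in effect and the function falls through to the additional checks.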

If the check indicates one of the protections listed above, it performs another check:

This will jump if PAGE_NOACCESS or PAGE_READONLY is set, and we get an access denied as expected:

If not, it will do another two checks:

It checks whether the page is MEM_IMAGE (0x1000000) or MEM_PRIVATE (0x20000); only if it is neither does it go to the same ACCESS_DENIED routine. Otherwise it sets a value in EAX, which is eventually passed in RSI to NtProtectVirtualMemory:

Now, what are those values:
0x20000000 - MEM_LARGE_PAGES (large page support)

This means that the OS will nicely change the page protection to writeable for us, without ever returning access denied. In case it's an image, it will set it to write-copy, which means a private copy of the loaded image is created for the process, so it won't overwrite shared memory.

After this, the same NtWriteVirtualMemory shown above will be called. Finally, the page protection will be reverted to the original. Essentially we got write access to an EXECUTE-only page; obviously only if our process has the permission to apply those changes, so it won't bypass any protection.

On older versions of Windows 10 the function is slightly different, but the logic is exactly the same:

On Windows 7 and 8 the behaviour also exists, but the function logic is different. It will try to set the memory to PAGE_EXECUTE_READWRITE right away, or to PAGE_READWRITE if that fails:

Then it checks whether the old protection was PAGE_EXECUTE_READWRITE, PAGE_READWRITE or PAGE_WRITECOPY; if yes, it restores the original protection (as the memory is writeable anyway) and writes to it. If not, it checks whether the old protection was PAGE_NOACCESS or PAGE_READONLY; if yes, it returns ACCESS_DENIED, otherwise it calls NtWriteVirtualMemory… while the page protection is set to PAGE_EXECUTE_READWRITE/PAGE_READWRITE. Again, a shortcut to write access to EXECUTABLE pages.
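The Windows 7/8 flow can be condensed into a small C sketch (the constants match winnt.h; the enum and function names are my own invention, condensing the decision logic described above):

```c
#include <assert.h>

/* Page-protection constants, as defined in winnt.h */
#define PAGE_NOACCESS          0x01
#define PAGE_READONLY          0x02
#define PAGE_READWRITE         0x04
#define PAGE_WRITECOPY         0x08
#define PAGE_EXECUTE           0x10
#define PAGE_EXECUTE_READWRITE 0x40

typedef enum {
    WRITE_WITH_OLD_PROTECTION, /* page was already writeable: restore and write */
    RETURN_ACCESS_DENIED,      /* PAGE_NOACCESS / PAGE_READONLY */
    WRITE_WITH_NEW_PROTECTION  /* execute-only page: write while promoted to RW */
} write_action;

/* old_protect is the previous protection returned after the function
   has already bumped the page to (EXECUTE_)READWRITE */
static write_action decide(unsigned int old_protect)
{
    if (old_protect == PAGE_EXECUTE_READWRITE ||
        old_protect == PAGE_READWRITE ||
        old_protect == PAGE_WRITECOPY)
        return WRITE_WITH_OLD_PROTECTION;
    if (old_protect & (PAGE_NOACCESS | PAGE_READONLY))
        return RETURN_ACCESS_DENIED;
    return WRITE_WITH_NEW_PROTECTION;
}
```

The third branch is the shortcut: an execute-only page stays promoted to a writeable protection for the duration of the write.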

Here is the write:

The ReactOS code reflects this behaviour:

Yes, you could also set the page protection yourself, but the OS nicely does it for you, so that's one less thing to care about when developing an exploit. Based on MSDN, however, I would expect this to fail (but maybe I misinterpret it):

PAGE_EXECUTE - 0x10 - Enables execute access to the committed region of pages. An attempt to write to the committed region results in an access violation.
PAGE_EXECUTE_READ - 0x20 - Enables execute or read-only access to the committed region of pages. An attempt to write to the committed region results in an access violation.

What happens if we call NtWriteVirtualMemory directly? It fails as expected, since the page protection is not modified; for example with:

0x8000000D - STATUS_PARTIAL_COPY - Because of protection conflicts, not all the requested bytes could be copied.

Part 2 - Why?

I found many mentions here and there that this works, but I contacted Microsoft for further explanation, got it, and want to thank them for providing these insights. Basically this is done for debuggers: when a debugger wants to write to memory, it can simply call this API without having to set the page protection every single time. Here are the details:

Here is what the site above says:

"There are a bunch of functions that allow you to manipulate the address space of other processes, like Write­Process­Memory and Virtual­Alloc­Ex. Of what possible legitimate use could they be? Why would one process need to go digging around inside the address space of another process, unless it was up to no good? These functions exist for debuggers. For example, when you ask the debugger to inspect the memory of the process being debugged, it uses Read­Process­Memory to do it. Similarly, when you ask the debugger to update the value of a variable in your process, it uses Write­Process­Memory to do it. And when you ask the debugger to set a breakpoint, it uses the Virtual­Protect­Ex function to change your code pages from read-execute to read-write-execute so that it can patch an int 3 into your program. If you ask the debugger to break into a process, it can use the Create­Remote­Thread function to inject a thread into the process that immediately calls Debug­Break. (The Debug­Break­Process was subsequently added to make this simpler.) But for general-purpose programming, these functions don't really have much valid use. They tend to be used for nefarious purposes like DLL injection and cheating at video games."

UPDATE 2018.09.02. - The story gets worse

So after I wrote this, Alex Ionescu came along and made it even worse :D

With that, the post wouldn't be complete without explaining what Alex pointed out, which I think is much, much worse than the first part. I spent the weekend unable to stop thinking about this, and when the light came on, I reached out to Alex.

You can use this function to write to kernel pages from user mode. This sounds terrible at first (and second, and third...), but you will see that it is not that horrible, only a little bit. :) So why does this happen?

When you call WriteProcessMemory, it calls ntdll!NtWriteVirtualMemory, which eventually calls nt!NtWriteVirtualMemory, which on newer Windows 10 versions calls nt!MiReadWriteVirtualMemory. That is where the kernel checks whether you come from user land and may write to the targeted memory, to prevent writes to the kernel. But what is really being checked?

1. It checks whether the API is being called from kernel or user space (PreviousMode).
2. If you come from user mode, it performs another check, verifying the address range you are trying to write to against the MmUserProbeAddress variable, which points to the end of the user address space. On x64 machines this is a hardcoded value in the code, so there is no actual variable, as you can see below in IDA.

Here is the related ReactOS code snippet for easier understanding (which reflects older Windows versions, but the idea is the same):


    if (PreviousMode != KernelMode)
    {
        //
        // Validate the read addresses
        //
        if ((((ULONG_PTR)BaseAddress + NumberOfBytesToWrite) < (ULONG_PTR)BaseAddress) ||
            (((ULONG_PTR)Buffer + NumberOfBytesToWrite) < (ULONG_PTR)Buffer) ||
            (((ULONG_PTR)BaseAddress + NumberOfBytesToWrite) > MmUserProbeAddress) ||
            (((ULONG_PTR)Buffer + NumberOfBytesToWrite) > MmUserProbeAddress))
        {

If you pass these checks the write will happen.

For kernel exploit writers the flaw is probably obvious at this point if you think about the classic SMEP bypass:
—> from page 31

Here is the issue in short:
If you can set the U/S (owner) bit to 0 (clear) in a PTE entry, it means the page belongs to the kernel. Normally you don't have any kernel pages in the user address space, but if you manage to mess with the PTE (via a kernel exploit), you can, and it will be valid: you can turn a user page into a kernel page. If that happens, you can use WriteProcessMemory to write to those pages, because the actual PTE flag is not verified, which means you can write to kernel pages from user mode.
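To make the U/S bit concrete, here is a small C sketch of the relevant x64 PTE flags (the helper names are mine; a real exploit would of course operate on the actual page-table entries, not on plain integers):

```c
#include <assert.h>
#include <stdint.h>

/* Low flag bits of an x64 page-table entry:
   bit 2 is the U/S (user/supervisor) flag:
   1 = user-mode accessible, 0 = supervisor (kernel) only */
#define PTE_PRESENT (1ULL << 0)
#define PTE_RW      (1ULL << 1)
#define PTE_US      (1ULL << 2)

static int is_user_page(uint64_t pte)
{
    return (pte & PTE_US) != 0;
}

/* What a kernel write primitive would do: clear U/S so the CPU treats
   the page as a kernel page, even though it sits in the user VA range. */
static uint64_t make_supervisor(uint64_t pte)
{
    return pte & ~PTE_US;
}
```

The range check in MiReadWriteVirtualMemory looks only at the target address against MmUserProbeAddress, never at this flag, which is exactly the gap described above.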

Obviously this doesn’t happen normally, but still…

Additionally, on older systems you could modify MmUserProbeAddress (for example with a write-what-where kernel exploit) and set it to the end of the kernel address space, at which point you have also bypassed the verification and have very nice R/W access to kernel space. See also: https://j00ru.vexillium.org/2011/06/smep-what-is-it-and-how-to-beat-it-on-windows/ These days you would need to patch the actual code, which is protected by HVCI and PatchGuard, so it's not really possible unless you exploit the hypervisor.

Overall, you can potentially get write access to the kernel address space from user mode, but not by default, and not in a straightforward way.

I want to thank Alex again, first for pointing this out, and then for talking through this whole thing with me.