• Windows Server 2016 - A strategy for securing privileged access No matter how secure you make an operating system or service, it is only as
    secure as its weakest password. For example, suppose that you have the most sensitive data on earth
    and you encrypt it by using the most sophisticated technology available, but then you protect it with a password like “Password01”; this utterly defeats the purpose of putting in place a battery of security technologies.

    Let’s look at another scenario. Walk around your office and count how many people have written their passwords on notes and stuck them on their keyboards or monitors. Then, observe how many people have pictures of their family or pets on their desk. When those people need to think of a password, what is the likelihood that it might be something personal based on the pictures?

    Now, let’s consider a final scenario: the social engineering attack. With this particular form of attack— which is a leading cause of security breaches—the attacker calls someone, out of the blue, and pretends to be from IT, saying he needs to verify some account information. If the attacker is good at his job, the chances are high that the hapless victim will readily provide the information.

    With those scenarios in mind, an attacker who gains access to an account can potentially use that access to mount an escalated attack. But what if the compromised account were a privileged one in the first place?

    Securing privileged access is not a single technology; it is a set of practices that an organization can
    implement to become more secure. Although this strategy focuses primarily on privileged access, it also
    highlights the need for any organization to implement and test all security-related policies and to
    conduct the readiness activities necessary to make people aware of potential areas of exposure.

    No network to which users have access will ever be 100 percent secure, but to begin down the path of
    securing privileged access to systems and networks, you must be diligent with regard to the following
    basics:

     Updates Deploy updates to domain controllers within seven days of release.

     Remove users as local administrators Monitor and remove users from local administrators if they don’t need this access. Use Active Directory to control membership centrally, if required.

     Baseline security policies Deploy policies that will maintain a standard configuration for the
    organization. Exceptions will exist, of course, based on applications and certain requirements, but
    these should be challenged on a repeated basis to ensure that the system is as compliant as
    possible.

     Antimalware programs Maintain regular updating and regular scans of the environment. Clean
    and remove threats as quickly as possible.

     Log and analysis Capture security information, perform regular reviews, and identify anomalies
    within the log set. Perform follow-up action on each detected item to ensure that it is an identified source and safe “risk.”

     Software inventory and deployment Controlling the software installed in an environment is paramount to ensure that end users don’t install malware into the environment. In the same
    vein, it is important to know what software is out there and to maintain an inventory so that you
    are aware if the state of a system has changed.
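    The inventory point above can be sketched as a simple drift check: compare the current software inventory against a stored baseline and report what appeared or disappeared. This is a minimal illustration in Python; the package names are made up for the example, not taken from the source.

```python
# Sketch: detect software-inventory drift against a stored baseline.
# Package names below are illustrative placeholders.

def inventory_drift(baseline, current):
    """Return (added, removed) package sets relative to the baseline."""
    baseline, current = set(baseline), set(current)
    return current - baseline, baseline - current

baseline = {"AcmeAgent 2.1", "Defender 4.18", "SqlClient 13.0"}
current  = {"AcmeAgent 2.1", "Defender 4.18", "UnknownTool 0.9"}

added, removed = inventory_drift(baseline, current)
print("added:", added)      # software that appeared since the baseline was taken
print("removed:", removed)  # software that disappeared
```

    In practice the inventories would come from your deployment tooling; the set difference is the part that tells you the state of a system has changed.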

    With these basics covered, we can move into more details about the strategy that underpins securing
    privileged access. Be aware that you will not achieve this strategy overnight, and this should be built
    as a progressive implementation so that the organization’s practices can change and adapt to these
    new principles.

    As with most strategies, you need to establish short-, medium-, and long-term goals. The following
    table describes the goals and the time frames you should use as well as the areas of focus for each
    goal.

    Source of Information : Microsoft Introduction Windows Server 2016

  • Windows Server 2016 Audit PNP Activity Found in the Detailed Tracking category, you can use the Audit PNP Activity subcategory to audit when plug-and-play detects an external device. Only Success audits are recorded for this category.

    Additional changes have been made in Windows Server 2016 that expose more information to help
    you identify and address threats quickly. The following sections provide more information:


    Kernel Default Audit Policy
    In previous releases, the kernel depended on the Local Security Authority (LSA) to retrieve information in some of its events. In Windows Server 2016, the process creation events audit policy is automatically turned on until an actual audit policy is received from the LSA. This results in better auditing of services that might start before the LSA starts.


    Default process system access control list (SACL) on LSASS.exe
    A default process SACL was added to LSASS.exe to log processes attempting to access LSASS.exe. The SACL is L"S:(AU;SAFA;0x0010;;;WD)". You can turn this on under Advanced Audit Policy Configuration|Object Access|Audit Kernel Object.


    New fields in the sign-in event
    The sign-in event ID 4624 has been updated to include more verbose information to make it easier to analyze. The following fields have been added to event 4624:

     MachineLogon String: yes or no
    If the account that signed in to the PC is a computer account, this field will be yes; otherwise, the field is no.

     ElevatedToken String: yes or no
    If the account that signed in to the PC is an administrative sign-in, this field will be yes; otherwise, the field is no. Additionally, if this is part of a split token, the linked login ID (LSAP_LOGON_SESSION) will also be shown.

     TargetOutboundUserName String and TargetOutboundUserDomain String
    The user name and domain of the identity that was created by the LogonUser method for outbound traffic.

     VirtualAccount String: yes or no
    If the account that signed in to the PC is a virtual account, this field will be yes; otherwise, the field is no.

     GroupMembership String
    A list of all of the groups in the user’s token.

     RestrictedAdminMode String: yes or no
    If the user signs in to the PC in restricted admin mode with Remote Desktop, this field will be yes.
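    When these 4624 records are exported as XML, the fields above appear as `Data` elements in the event body. The following Python sketch pulls them into a dictionary; the sample XML is a trimmed, hand-made record (not a full event), and the `%%...` values imitate the raw resource-identifier form that Event Viewer renders as Yes/No.

```python
# Sketch: extract the new Windows Server 2016 fields from an exported
# event 4624 record. SAMPLE_4624 is a hand-made, trimmed sample.
import xml.etree.ElementTree as ET

SAMPLE_4624 = """
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <EventData>
    <Data Name="TargetUserName">svc-backup</Data>
    <Data Name="ElevatedToken">%%1842</Data>
    <Data Name="VirtualAccount">%%1843</Data>
    <Data Name="RestrictedAdminMode">-</Data>
  </EventData>
</Event>
"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def event_data(xml_text):
    """Map each <Data Name="..."> element in EventData to its text value."""
    root = ET.fromstring(xml_text)
    return {d.get("Name"): d.text for d in root.findall(".//e:Data", NS)}

fields = event_data(SAMPLE_4624)
print(fields["TargetUserName"], fields["ElevatedToken"])
```

    The same helper works for any of the new fields listed above, since they all arrive as `Data` elements keyed by `Name`.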


    New fields in the process creation event
    The process creation event ID 4688 has been updated to include more verbose information to make it easier to analyze. The following fields have been added to event 4688:

     TargetUserSid String
    The SID of the target principal.

     TargetUserName String
    The account name of the target user.

     TargetDomainName String
    The domain of the target user.

     TargetLogonId String
    The logon ID of the target user.

     ParentProcessName String
    The name of the creator process.

     ParentProcessId String
    A pointer to the actual parent process if it's different from the creator process.
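    The ParentProcessName field makes it possible to reconstruct parent/child relationships from a batch of 4688 records. A minimal sketch in Python follows; the record dictionaries are hand-made samples, and `NewProcessName` is a standard 4688 field assumed here even though it is not in the list of newly added fields above.

```python
# Sketch: build a parent -> children map from 4688-style records.
# The records are hand-made samples, not real exported event data.
from collections import defaultdict

events = [
    {"NewProcessName": r"C:\Windows\System32\cmd.exe",
     "ParentProcessName": r"C:\Windows\explorer.exe"},
    {"NewProcessName": r"C:\Windows\System32\whoami.exe",
     "ParentProcessName": r"C:\Windows\System32\cmd.exe"},
]

children = defaultdict(list)
for ev in events:
    children[ev["ParentProcessName"]].append(ev["NewProcessName"])

for parent, kids in children.items():
    print(parent, "->", kids)
```

    A tree like this is what makes a suspicious chain (for example, an office application spawning a shell) easy to spot during log review.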


    Security Account Manager (SAM) events
    New SAM events were added to cover SAM APIs that perform read/query operations. In previous versions of Windows, only write operations were audited. The new events are event ID 4798 and event ID 4799. The following
    APIs are now audited:
    SamrEnumerateGroupsInDomain
    SamrEnumerateUsersInDomain
    SamrEnumerateAliasesInDomain
    SamrGetAliasMembership
    SamrLookupNamesInDomain
    SamrLookupIdsInDomain
    SamrQueryInformationUser
    SamrQueryInformationGroup
    SamrQueryInformationAlias
    SamrGetMembersInGroup
    SamrGetMembersInAlias
    SamrGetUserDomainPasswordInformation
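    In a log pipeline, surfacing these new read/query audits comes down to filtering on the two event IDs named above (4798 and 4799). A minimal sketch, with hand-made sample records:

```python
# Sketch: flag SAM read/query audit events (IDs 4798 and 4799) in a
# stream of exported records. The record dicts are hand-made samples.
SAM_QUERY_EVENTS = {4798, 4799}

def sam_queries(records):
    """Keep only the SAM enumeration/query audit records."""
    return [r for r in records if r["EventID"] in SAM_QUERY_EVENTS]

records = [
    {"EventID": 4624, "Subject": "svc-web"},
    {"EventID": 4798, "Subject": "jdoe"},
    {"EventID": 4799, "Subject": "scanner"},
]
for hit in sam_queries(records):
    print(hit["EventID"], hit["Subject"])
```

    A burst of these events from a single workstation is a common sign of reconnaissance tooling enumerating users and groups.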


    Boot Configuration Database (BCD) events
    Event ID 4826 has been added to track the following changes to the BCD:
    DEP/NEX settings
    Test signing
    PCAT SB simulation
    Debug
    Boot debug
    Integrity Services
    Disable Winload debugging menu


    PNP Events
    Event ID 6416 has been added to track when an external device is detected through plug-and-play. One important scenario is if an external device that contains malware is inserted into a high-value machine that doesn’t expect this type of action, such as a domain controller.

  • Remote credential guard Remote credential guard provides protection against your credentials being stolen when you are remotely connected to a system via a remote desktop session.

    When a user attempts to remote desktop to a remote host, the Kerberos request is redirected back to the originating host for authentication. The credential simply does not exist on the remote host any more. If a remote host (i.e., an end user’s computer or server) has malicious code running on it that can obtain credentials, remote credential guard will mitigate this because no credentials will be passed into the remote host.

    There are some requirements for remote credential guard to operate:
     The user and the remote host must be joined to the same Active Directory domain, or the remote host must be joined to a domain with a trust relationship to the client device’s domain.
     They must use Kerberos authentication.
     They must be running at least Windows 10, version 1607 or Windows Server 2016.
     The Remote Desktop classic Windows app is required. The Remote Desktop Universal Windows
    Platform app doesn't support Remote Credential Guard.

    To turn on remote credential guard, you can configure it via Group Policy and deploy it widely
    across your estate.

    To configure this via group policy, open the Group Policy Management Console, and then go to Computer Configuration -> Administrative Templates -> System -> Credentials Delegation. Next,
    double-click Restrict Delegation To Remote Servers, and then select Require Remote Credential
    Guard. Finally, click OK and run gpupdate /force to push the group policy out.

  • Enhanced Kernel Mode protection using Hypervisor Code Integrity The core functionality and protection of Device Guard begins at the hardware level. Devices that have processors equipped with SLAT technologies and virtualization extensions, such as Intel VT-x and AMD-V, will be able to take advantage of a Virtualization Based Security (VBS) environment that dramatically enhances Windows security by isolating critical Windows services from the operating system itself.

    Device Guard uses VBS to isolate its Hypervisor Code Integrity (HVCI) service, which makes it possible for Device Guard to help protect kernel mode processes and drivers from vulnerability exploits and zero-day attacks. HVCI uses the processor’s functionality to force all software running in kernel mode to safely allocate memory. This means that after memory has been allocated, its state must be changed from writable to read-only or run-only. By forcing memory into these states, it helps to ensure that attacks are unable to inject malicious code into Kernel mode processes and drivers through techniques such as buffer overruns or heap spraying.
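    The memory rule described above is essentially "write xor execute": a page may be writable while it is being filled, but it must flip to read-only or execute-only before it can run. The following is a toy model of that rule in Python, purely to illustrate the state transitions; it is not how HVCI is implemented.

```python
# Toy model of the W^X rule HVCI enforces on kernel-mode allocations:
# a page is never writable and executable at the same time.

class Page:
    def __init__(self):
        self.state = "writable"   # fresh allocation: code/data can be written

    def seal(self, state):
        """Transition the page to an executable-safe state."""
        if state not in ("read-only", "execute-only"):
            raise ValueError("must seal to read-only or execute-only")
        self.state = state

    def execute(self):
        if self.state == "writable":
            raise PermissionError("W^X violation: page is still writable")
        return "executed"

page = Page()
page.seal("execute-only")
print(page.execute())
```

    An exploit that sprays code into a still-writable buffer and jumps to it corresponds to calling `execute()` before `seal()`, which this model rejects.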

    To deliver this level of security, Device Guard has the following hardware and software requirements:
     UEFI Secure Boot (optionally with a non-Microsoft UEFI CA removed from the UEFI database)
     Virtualization support turned on by default in the system firmware (BIOS):
     Virtualization extensions (for example, Intel VT-x and AMD RVI)
     SLAT (for example, Intel EPT and AMD RVI)
     IOMMU (for example, Intel VT-d, AMD-Vi
     UEFI BIOS configured to prevent an unauthorized user from disabling Device Guard–dependent hardware security features (for example, Secure Boot)
     Kernel-mode drivers signed and compatible with hypervisor-enforced code integrity

  • Shielded VMs Today, in most virtual environments there are many types of administrators who have access to VM
    assets, such as storage. That includes virtualization administrators, storage administrators, network
    administrators, and backup administrators, to name just a few. Many organizations, including hosting
    providers, need a way to secure VMs—even from administrators—which is exactly what shielded VMs provide. Keep in mind that this protection from administrators is needed for a number of reasons.

    Here are just a few:

     Phishing attacks
     Stolen administrator credentials
     Insider attacks

    Shielded VMs provide protection for the data and state of the VM against inspection, theft, and
    tampering by anyone with administrator privileges. Shielded VMs work with Generation 2 VMs, which
    provide the required Secure Boot, UEFI firmware, and virtual Trusted Platform Module (vTPM) 2.0
    support. Although the Microsoft Hyper-V hosts must be running Windows Server 2016, the guest OS in the VM can be Windows Server 2012 or above.

    A new Host Guardian Service instance is deployed in the environment; it stores the keys that are released only to approved Hyper-V hosts that can prove their health, allowing those hosts to run shielded VMs.

    A shielded VM provides the following benefits:
     BitLocker encrypted drives (utilizing its vTPM)
     A hardened VM worker process (VMWP) that encrypts live migration traffic in addition to its runtime state file, saved state, checkpoints, and even Hyper-V Replica files
     No console access in addition to blocking Windows PowerShell Direct, Guest File Copy Integration Components, and other services that provide possible paths from a user or process with
    administrative privileges to the VM

    How is this security possible? First, it’s important that the Hyper-V host has not been compromised
    before the required keys to access VM resources are released from the Host Guardian Service (HGS).
    This attestation can happen in one of two ways. The preferred way is by using the TPM 2.0 that is
    present in the Hyper-V host. Using the TPM, the boot path of the server is assured, which guarantees
    no malware or rootkits are on the server that could compromise its security. The TPM also secures
    communication to and from the HGS attestation service. For hosts that do not have a TPM 2.0, an
    alternate Active Directory–based attestation is possible; however, this merely checks whether the host is part of a configured Active Directory group. It therefore does not provide the same level of
    assurance, nor the same protection against binary tampering by a sophisticated attacker with host
    administrator privileges. The same shielded VM features are available either way, however.

    After a host undergoes the attestation, it receives a health certificate from the attestation service on
    the HGS that authorizes the host to get keys released from the key protection service that also runs
    on the HGS. The keys are encrypted during transmission and can be decrypted only within a protected enclave that is new to Windows 10 and Windows Server 2016 (more on that later). These keys can then be used to decrypt the vTPM to make it possible for the VM to access its BitLocker-protected storage and start the VM. Therefore, only if a host is authorized and noncompromised will it be able to get the required key and turn on the VM’s access to the encrypted storage (not the administrator, though, as the virtual hard drive (VHD) remains encrypted on the drive).

    At this point, it might seem self-defeating: if I am an administrator on the Hyper-V host and the keys
    are released to the host to start the VM, I could gain access to the memory of the host and
    obtain the keys, thus nullifying the very security that should protect VMs from administrative privileges.
    Fortunately, another new feature in Windows 10 and Windows Server 2016 prevents this from
    happening. This feature is the protected enclave mentioned earlier, which is known as Virtual Secure
    Mode (VSM). A number of components use this service, including Credential Guard. VSM is a secure execution environment in which secrets and keys are maintained and critical security processes run as Trustlets (small trusted processes) in a secure virtualized partition.

    This is not a Hyper-V VM; rather, think of it as a small virtual safe that is protected by virtualization-based technologies such as Second Level Address Translation (SLAT), which prevents attempts to directly access its memory, and the I/O Memory Management Unit (IOMMU), which protects against Direct Memory Access (DMA) attacks. The Windows operating system, even the kernel, has no access to VSM. Only safe processes (Trustlets) that are Microsoft signed are allowed to cross the “bridge” to access VSM.

    A vTPM Trustlet is used for the vTPM of each VM, separate from the rest of the VM process, which runs in a new type of protected VM worker process. This means that there is no way to access the memory used to store these keys, even with complete kernel access. If I'm running with a debugger attached, for example, that would be flagged as part of the attestation process, the health check would fail, and the keys would not be released to the host. Remember that the keys from the key protection service are sent encrypted? It is VSM that decrypts them, always keeping the decrypted keys protected from the host OS.

    When you put all of this together, you have the ability to create a secure VM environment that is
    protected from any level of administrator (when using TPM 2.0 in the host) and will close a security
    hole many environments cannot close today.

  • Windows Server 2016 container What is a container?
    A container in its simplest form is exactly that—a container. It is an isolated environment in which you can run an application without fear of changes due to applications or configuration. Containers share key components (kernel, system drivers, and so on) that can reduce startup time and provide greater density than you can achieve with a VM.

    The interesting thing about containers is the application itself. The application might have various
    dependencies that it requires to run. These dependencies exist only within the container itself. This
    means that something bad that happens to Application A and the binaries it depends on has no
    impact on Application B and the binaries on which it depends. For example, in most environments, if
    you delete the registry from Application A, the consequences are disastrous for both Application A
    and Application B. However, with containers, Application A and Application B are each self-contained, and the change to the registry for Application A does not affect Application B.

    Because all binaries and dependencies are hosted within the container, the application running in the
    container is completely portable. Essentially, this means that you can deploy a container to any host
    running the container manager software, and it will start and run without any modification. For
    example, a developer can begin developing an application and deploy it into a Hyper-V container
    on Windows 10 Anniversary Update. When the application is ready to roll out in production, it can run on
    Windows Server 2016, including Nano Server, in a public, private, or hybrid cloud.

    Containers are built on layers. The first layer is the base layer. This is the OS image on which all other layers will be built. This image is stored in an image repository so that you can reference it when necessary. The next layer (and sometimes the final layer) is the application framework layer that can be shared between all of your applications. For example, if your base layer is Windows Server Core, your
    application framework layer could be .NET Framework and Internet Information Services (IIS). The
    second layer can also be stored as an image, which, when called, also describes its dependency on the
    base layer of Windows Server Core. Finally, the application layer is where the application itself is
    stored, with references to the application framework layer and, in turn, to the base layer.

    The base layer and the application layer can be referenced at any time by any other application
    container you create. Each layer is considered read-only except the top layer of the “image” you are
    deploying. For example, if you deploy a container that depends only on the Windows Server Core
    image, this Windows Server Core layer is the top layer of the container and a sandbox is put in place
    to store all the writes and changes made during runtime. You can then store the changes made as
    another image for later reuse. The same applies if you deploy the application framework layer image;
    this layer would have its own sandbox, and if you deploy your application to it, you can then save the
    sandbox as a reusable image.

    Basically, when you deploy a container to a host, the host determines whether it has the base layer. If not, it pulls the base layer from an image repository. Next, it repeats the process for the application framework layer and then creates the application container that you were originally trying to deploy. If
    you then want to create another container with the same dependencies, you simply issue a command
    to create the new application container, and it is provisioned almost immediately because all of the
    dependencies are already in place. If you have an application container that depends on a different
    application framework layer as well as on the original Windows Server Core base layer, you can simply pull the different application framework layer from an image store and start the new application container.
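    The layering described above is a copy-on-write stack: lower layers are read-only and shared, while all writes land in the top sandbox layer, which can later be saved as a new image. As a rough analogy (not the real container filesystem), Python's `ChainMap` behaves the same way, since writes go only to the first mapping; the layer contents below are invented for the example.

```python
# Sketch: container image layers as a copy-on-write stack.
# Lower layers (base OS, application framework) stay read-only;
# every write lands in the top "sandbox" layer.
from collections import ChainMap

base_layer      = {"kernel-interface": "server-core", "cmd.exe": "v10.0"}
framework_layer = {"dotnet": "4.6", "iis": "10.0"}
sandbox         = {}  # writable top layer for this running container

container_fs = ChainMap(sandbox, framework_layer, base_layer)

container_fs["app.dll"] = "1.0"     # new file: written to the sandbox only
container_fs["iis"] = "10.0-patch"  # "change" to a lower layer: shadow copy

print(sandbox)                      # only the runtime changes
print(container_fs["iis"])          # sandbox copy shadows the framework layer
print(framework_layer["iis"])       # the shared lower layer is untouched
```

    Saving the sandbox as a reusable image corresponds to persisting `sandbox` as a new layer that records its dependency on the layers beneath it.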

  • Nano Server Nano Server is an exciting new installation option for Windows Server 2016 that has an even smaller
    footprint than the Server Core installation option.

    Nano Server is a new, small-footprint, headless installation option for Windows Server 2016. It is a
    deep refactoring of Windows Server that is optimized for the cloud. As such, Nano Server in Windows Server 2016 is ideal for the following scenarios:

     Compute Host for Hyper-V or part of a Windows Failover Cluster
     Container Host
     Storage Host for a Scale-Out File Server (SOFS)
     DNS server
     Web server running IIS
     Application Platform for apps that are built using cloud development patterns and run in a container and/or VM guest

    Nano Server is fully headless; thus, it might require some changes to management and operations
    procedures for organizations that aren’t fully managing their current server deployments remotely.

    Windows Server customers have provided this feedback:

     Reboots have a negative impact on my business—why do I need to reboot because of a patch to a feature I never use?
     When a reboot is required, my servers need to be back in service as soon as possible.
     Large server images take a long time to deploy and consume a lot of network bandwidth.
     If the operating system consumes fewer resources, I can increase my virtual machine density.
     We can no longer afford the security risks of the "install everything everywhere" approach.

    Nano Server addresses these problems by including just the functionality required for its proposed
    use cases and nothing more. This minimizes the attack surface, reduces the number of patches that force reboots, and shrinks the footprint, which provides faster deployment and reboot times and frees up resources
    for other uses.

  • Microservices When it came to applications built for the web, we generally moved away from traditional n-tier
    architectures toward Service-Oriented Architecture (SOA). This was no easy task and put a lot of
    customers off rewriting their applications. SOA breaks down an application into components, which
    communicate with one another via some communication protocol.

    It could be said that SOA is the forefather of microservices, given that microservices break an application down even further, into smaller components that each live and run as an individual process and communicate with one another in a language-agnostic fashion.

    Microservices foster more rapid development than SOA because the components in a microservices
    model are far smaller. If you need to change a component, you can develop, update, and deploy it rapidly without affecting the operation of the other components. Each component is effectively an independent unit with its own internal way of doing things, but because all of the components share a common, language-agnostic communication model, it is simple to improve individual parts of an application built on microservices.

    Service Fabric is a distributed systems platform that makes building microservices or translating your
    application into microservices architecture easy to do, while also giving you the means to manage the
    full lifecycle of an application. It is available both on-premises and in Azure as Azure Service Fabric.
    You can write an application once and deploy it on-premises or to Azure with no API changes, all while using common development tools like Microsoft Visual Studio.

    Service Fabric powers many Microsoft services today, including Azure SQL Database, Azure
    DocumentDB, Cortana, Power BI, Intune, Azure Event Hubs, Azure IoT, Skype for Business, and many other core Azure services. All the lessons learned from running these solutions have been incorporated into the Service Fabric product and ensure that if your applications need a highly reliable and scalable solution, this is your microservices platform of choice.
