A Blog About Self-Imposed IT Projects and Tech Exploration

Category: Home Hacking Lab (Page 1 of 2)

A collection of posts covering my step-by-step build of a home hacking lab using some old servers, Proxmox for virtualization, open source software, and evaluation copies of Windows Server. Using these, I build a segmented network with a place for vulnerable machines, an Active Directory domain, a firewall, email, DNS, and Security Onion as a SIEM.

The lab is ready for practicing hacking, practicing defense, or learning small-scale system administration with a blend of Linux and Windows.

Firewall and DNS Configuration to Allow External Access

The final step for the internal network is enabling access to the DMZ network from external devices by changing the Firewall and DNS configuration. This involves configuring port forwarding to route external traffic to the appropriate internal devices. I also need to change the DNS configuration to route the traffic for the domain correctly. First step, port forwarding in pfsense.

Prerequisite: Installing a Network Firewall

Firewall Configuration in pfsense

To route external traffic to internal devices, we will configure port forwarding. This routes external traffic destined for certain ports, like port 443 for HTTPS, to the appropriate internal server in the DMZ. Within pfsense, we go to the NAT settings and the Port Forward tab. The image below shows an example configuration to route inbound HTTP traffic on the WAN interface to my DMZ host 10.10.1.12.

For my lab, I also added rules for DNS, SMTP, and HTTPS.

pfsense port forwarding configuration

Because my lab uses private IP addresses throughout, including on the WAN interface, I also had to disable the default rule that blocks private networks and loopback addresses on the WAN interface.

uncheck the block private IP and loopback on WAN interface pfsense

To test the configuration, I used my external Kali machine to run an Nmap scan of port 80 and 443 of my firewall.

Configure DNS for External Access

The next step is to configure the simulated external DNS to route traffic to my lab network from the external network. I added an A record to the external Pi-hole.

A record for the domain.

I also need an A record for the email server.

Next, I need to create an MX record. For dnsmasq, this requires a custom configuration file.

# touch /etc/dnsmasq.d/99-mail.conf
# pihole restartdns
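The file itself only needs a single mx-host line pointing the lab domain at the mail host's A record added above. The exact hostname is whatever that record uses, so treat this as a sketch:

mx-host=globomantics.local,mail.globomantics.local,1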

To check that the MX record is working, I use nslookup on the external Kali machine.
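The check just asks the external Pi-hole for MX records. From the Kali machine, assuming the Pi-hole is already its resolver, it looks roughly like this:

$ nslookup -type=mx globomantics.local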

Now that the configuration is complete, I run a few Nmap scans to check that the ports are forwarded to the correct internal devices, and that I can scan by domain name.
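For example, checking the forwarded web and mail ports by name rather than IP might look like this, assuming the external A record points at the firewall's WAN address:

$ nmap -Pn -p 25,80,443 globomantics.local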

With that, my Firewall and DNS configuration is complete and my lab is accessible from the simulated external network devices.

Adding a VulnHub Machine to the Proxmox Lab

I could practice and work on hacking the machines I already built, but another good addition to my lab is vulnerable machines. A good source of these is VulnHub. In this post I will cover how to add a VulnHub machine to Proxmox.

Prerequisite: Install Proxmox and Configure a Cluster

Download and Extract the Machine

The first step in adding the machine is to download it from VulnHub onto the host and extract it. For this example, I am using the machine Earth. To accomplish this, I entered the three commands below.

# mkdir vulnhub && cd vulnhub
# wget -O Earth.ova https://download.vulnhub.com/theplanets/Earth.ova
# tar xvf Earth.ova

Once the commands finish, you should have 3 files in the vulnhub directory.

output of ls command in vulnhub directory showing 3 files

Adding the VulnHub Machine to Proxmox

Now we need to create the VM in Proxmox to tie to the disk we downloaded. First we create the machine, and under operating system, select “Do not use any media.”

Create: Virtual Machine screen in Proxmox
Select OS screen to create Proxmox virtual machine

For the other options, I configured:

  • System: default
  • Disks: default
  • CPU: 1 socket / 1 core
  • Memory: 1024MB
  • Network: DMZnet / MTU: 1450

Once created, but before booting, the next step is to remove the hard disk. You do that by first detaching the existing disk and then removing the unused disk.

Unused disk after detach in Proxmox

Now you import the disk, using the command below. Replace “115” with the number corresponding to your virtual machine in Proxmox, and the vmdk file with the correct file corresponding to the machine you downloaded.

# qm importdisk 115 Earth_dev-disk001.vmdk local-lvm --format vmdk
Example command output after importing disk.

Once the disk imports, you need to go back to the Proxmox GUI and change the disk type to SATA.

Change disk type to SATA in Proxmox

After changing, you should see the hard drive in sata0.

virtual machine hardware information in Proxmox

The last step is to change the boot order to boot to the hard drive first, and then start the VM. After starting you should get the login screen.

Change boot order to hard drive in Proxmox.
login prompt for VulnHub VM in Proxmox
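If you prefer the command line, both of those hardware changes can also be made with qm set on the host. This is only a sketch, assuming VM ID 115 and that the imported volume came in as vm-115-disk-0; check the actual volume name in the VM's hardware list first.

# qm set 115 --sata0 local-lvm:vm-115-disk-0
# qm set 115 --boot order=sata0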

The whole process is that simple. Now you can import new VulnHub machines anytime to try them out in your Proxmox lab. With machines ready, it's time to configure the DNS and firewall for external access.

Configure a SPAN port for Security Onion in Proxmox

The remaining server to create in my lab is a Security Onion server. Security Onion is an out-of-the-box blend of multiple open source tools that feed a central alert dashboard. I created a whole separate Pluralsight course called Security Onion Concepts and Basic Functionality if you are interested; it covers the fundamentals, installation, and basic operation of the tool. This post is focused on capturing network traffic with my lab's Security Onion server. To enable that, I configure a SPAN port for Security Onion in Proxmox on my pfsense virtual machine.

Configure physical NIC passthrough on the host

Prerequisite: Installing a Network Firewall Using Pfsense in Proxmox

My Proxmox lab has multiple hosts, which significantly complicates this operation. The Security Onion server and pfsense firewall are located on separate hosts, which means I have to pass the network traffic between hosts. I enabled this capability previously with software-defined networking. There was a problem, though: that setup does not support SPAN ports effectively. The best way I found to enable a SPAN port in Proxmox is to configure physical NIC passthrough on the host. This allows me to assign a physical port on the host directly to a virtual machine, which successfully passes all traffic.

The first step I took was to enable IOMMU in the /etc/default/grub file by adding the line below, as seen in the image.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
Image of /etc/default/grub file with line added
Enable IOMMU in Proxmox

Then I ran update-grub and rebooted the machine. To check that the setting was correct after the reboot, I ran the command below and looked for “IOMMU enabled”.

# dmesg | grep -e DMAR -e IOMMU
Example output of the dmesg command above
Example output of dmesg command

Next I added the required modules to enable physical NIC passthrough by editing the /etc/modules file and adding the modules in the image below.

Output of /etc/modules file with required modules added
Added required modules for physical NIC passthrough
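For reference, the modules in the screenshot are the standard vfio set from the Proxmox PCI passthrough documentation, so /etc/modules ends up looking roughly like this:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd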

Just to be safe, I rebooted the machine at this point and then checked that I could add a PCI device in the hardware setting in Proxmox.

Configure a SPAN port for Security Onion in Proxmox

Now that I can map a physical NIC to one of my virtual machines, I added one of the host ethernet adapters to the hardware on my pfsense virtual machine.

To make this work correctly on my old servers, I also had to enable unsafe interrupts with this command:

# echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf 
Example output of command to enable unsafe interrupts
Command run to enable unsafe interrupts

Now when I access the pfsense interface menu from the command line, the new interface is available as bce0.

Example output of pfsense screen showing new interface available.
New interface available in pfsense

I assigned the interface the name OPT2 and then in the web configurator I enabled it and gave it a description of SPANport.

pfsense web configurator interface configuration screen
Enabling the interface in web configurator

To configure a SPAN port in pfsense, you actually create a bridge within the interface menu. I went to Interfaces > Interface Assignments > Bridges and made a new one to create the SPAN port. I selected the LAN interface, added a description and then selected the SPANPORT interface under Span Port in the advanced configuration.

pfsense SPAN port bridge configuration
Adding a SPAN port in pfsense

To complete my lab setup I did the same thing for the DMZ network, leaving me with two bridges.

Bridges view in pfsense web configurator
SPAN ports in pfsense

The last step to verify operation is to test the SPAN port. To test, I used the packet capture tool in pfsense and set the interface to SPANPORT. I also enabled promiscuous mode to capture all data seen by the adapter.

Once I ran it, I could see multiple packets showing it was working as intended.

On the Security Onion server I will add another physical adapter, just like the pfsense machine. Then I will connect the interfaces directly with an ethernet cable.

Skipping ahead a bit to show it works

The output in Security Onion is not something I cover in the lab build, but it worked as configured in this post. Here is a tcpdump on my Security Onion Server and an overview of alerts.
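For reference, the spot check on the Security Onion side is just a plain capture on the sniffing interface. Assuming that second adapter shows up as eth1, it is along the lines of:

# tcpdump -nn -i eth1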

I covered the installation in a Pluralsight course, and you could also follow their documentation to build it. Security Onion is really one of the last steps in creating the basic structure of this lab, other than adding the Kali machine and enabling remote access. In the next post, I also cover how to add VulnHub machines to the DMZ.

Join an Ubuntu Machine to Active Directory

The next step to finish off my client machine setup is to add my Linux machine to the domain. I am going to join Ubuntu to Active Directory so I can use domain accounts to authenticate and log in. Once joined, I log in with my admin account to test. The first step is to prepare the client machine by setting the hostname and changing DHCP settings.

Preparing the Ubuntu Machine

Prerequisite: Creating a Domain: Installing Active Directory on Server Core

The first thing I need to do is change my Ubuntu machine’s hostname to a fully qualified domain name (FQDN). I used the command below to fix my machine’s hostname and then the next command to check it.

$ sudo hostnamectl set-hostname ubuntudesk1.corp.globomantics.local
$ hostnamectl
Output of sudo hostnamectl set-hostname command
Changing my hostname using hostnamectl

Now that my hostname is fixed, the next step is to configure the DNS domain and set it to the internal Active Directory domain. You can make this change by adding a search line to resolv.conf, but since I am using DHCP, I set the search domain on my firewall, which is my DHCP server. Both options are shown below.

Example content of resolv.conf file with search domain added
Changing resolv.conf to add the local domain
Adding search domain to pfsense DHCP settings
Adding search domain in pfsense DHCP settings
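For reference, the resolv.conf option boils down to a single search line for the AD domain, something like:

search corp.globomantics.local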

Now I check the status using resolvectl status to make sure the changes took effect.

Output of resolvectl status showing correct DNS domain
Checking search domain settings

Now that the networking is set up correctly, the next step on my client is to install the necessary packages. I used the apt command below to install everything I needed.

$ sudo apt -y install realmd libnss-sss libpam-sss sssd sssd-tools adcli samba-common-bin oddjob oddjob-mkhomedir packagekit

Now I am ready for the next step which is actually joining the Ubuntu client to the domain.

Join Ubuntu to Active Directory

Joining Ubuntu to Active Directory is a multi-step process where I will use the terminal. The actual domain join is a single command, but after that I am going to take some additional steps to set up the users. The first step is to use realm to discover and then join the domain. Realm discover is used to obtain information about the domain and also list the required packages to connect, which I installed already in the previous step.

$ sudo realm discover corp.globomantics.local
Output of realm discover command
Using realm discover to obtain information about the Active Directory domain

Since all packages were already installed, I can use realm join to join the domain, and then realm list to confirm.

$ sudo realm join -U Administrator corp.globomantics.local
$ realm list
Output of realm list
Output of realm list showing configured domain

Now I need to set up home directories, which I can do using pam_mkhomedir. I first used nano to edit mkhomedir in /usr/share/pam-configs. Following the man page, I decided to stick with the default umask and skeleton directory settings.

Example mkhomedir config
Edit mkhomedir in pam-configs
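For reference, the mkhomedir profile is short; with the default umask and skeleton directory it looks roughly like this:

Name: activate mkhomedir
Default: yes
Priority: 900
Session-Type: Additional
Session:
        required    pam_mkhomedir.so umask=0022 skel=/etc/skel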

Next I entered the command below to update PAM and set the options shown in the image. Then I restarted sssd after pam-auth-update finished. After that, I used realm to permit everyone to log in.

$ sudo pam-auth-update
Output of pam-auth-update
pam-auth-update settings
$ sudo systemctl restart sssd
$ sudo realm permit --all

Once everyone is permitted to log in, the next step is to give my domain admin accounts administrative privileges on the Ubuntu machine. I set this in the sudoers file for the admin accounts, as shown in the image below.

Example sudoers file configuration
Enable admin accounts to have admin privileges
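The exact entries are in the screenshot above; as a rough sketch, a per-account rule for my admin user would look something like the line below (the fully qualified name is an assumption based on sssd's default naming after realm join):

badmin@corp.globomantics.local ALL=(ALL:ALL) ALL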

Now that everything is set up, I should be able to log in with a domain account, and if it is an admin account, sudo should work.

Testing Domain Login and Admin Access

To test login, I will use SSH to access the Ubuntu machine from my Windows 10 admin machine. If everything is set up correctly I should be able to SSH without specifying a login name from Windows 10 while logged in as BAdmin, and then enter a sudo command.
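Roughly, the test amounts to the two commands below, assuming the hostname I set earlier in this post:

C:\Users\BAdmin>ssh ubuntudesk1.corp.globomantics.local
$ sudo apt update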

Example SSH from Windows 10 to Ubuntu machine
Login to Ubuntu machine using SSH from Windows 10 PC
Output of sudo apt update

That’s it! My configuration is successful and I can login to my Ubuntu machine using domain credentials. My client machines are all set up correctly and ready for testing. The next step in my process is to install and configure Security Onion which is the topic for the next series of posts.

New User Machines: Creating Windows Clients

In this post I will go over the creation of a Windows client machine with Windows 11 as the OS and add the PC to the Globomantics domain. Once the machine is added to the domain, I install the Thunderbird email client and configure it to connect to the iRedMail server. I also followed the same procedure on a Windows 10 machine, which is very similar, so I will focus on the Windows 11 PC for this post.

Adding a Windows Client Machine

Prerequisite: Bulk User Creation with PowerShell

The first step to creating a client machine in my Proxmox lab is to clone the template machine; I used the full clone mode in Proxmox. After creating the machine, I started it up, chose the correct region and keyboard layout, and accepted the license agreement. For sign-in options, I selected domain join instead.

Full clone mode screen in Proxmox
Example full clone menu

I configured the Windows 11 PC for my user Jane Johnson created in the previous post. Once it loaded, I went to settings to change the computer name and add it to the Globomantics domain.

Settings menu in Windows 11 to domain join a PC
Settings menu in Windows 11 to change computer name and add to domain
Changing the computer name and adding to the domain
Changing computer name and adding to domain

After adding to the domain I restarted the new PC. Then I could login as Jane Johnson to the CORP domain.

Windows 11 domain login
Logging in to the CORP domain on Windows 11 PC

Configuring the Thunderbird Email Client

Once logged in, I can access the Globomantics share created previously. I downloaded the Thunderbird client installer on my Windows Admin PC and uploaded it to the share. On each client I just run the installer from the share.

Windows Explorer used to upload a file to a network share.
Uploading the Thunderbird email client to Globoshare
Running a file from a network share
Running the email client from the Globoshare

After Thunderbird finished installing I configured it on each machine to map to the email accounts created in the last post. Part of that involved configuring the connection manually to iRedMail. An example of the configuration I used is in the image below.

Thunderbird email and password entry
Manual configuration of IMAP settings in Thunderbird
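For reference, the manual settings amount to roughly the following. These reflect iRedMail's usual defaults and the ports opened on the DMZ firewall in the email DNS post, so treat them as a sketch rather than a copy of the screenshot:

  • Incoming: IMAP, server mail.globomantics.local, port 143, STARTTLS, normal password
  • Outgoing: SMTP, server mail.globomantics.local, port 587, STARTTLS, normal password
  • Username: the account's full email address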

Once configured, the client will connect to the server. My email server is using a self-signed certificate, so I had to confirm a security exception.

Confirming the security exception for the self-signed certificate in Thunderbird
Confirming the security exception for the self-signed certificate

After that Thunderbird loaded the inbox so I could send a test email between the user accounts to confirm my email server works.

Inbox on first login to Thunderbird
Test email between John Smith and Jane Johnson

Again, because of the self-signed certificate, I had to confirm another security exception.

Successful test email received

The test email succeeded, finishing up my internal configuration and user services! The domain is ready for internal use, and all services are working. The next steps are finishing up my clients, installing Security Onion, and configuring the firewall to allow external traffic into the DMZ servers.

Bulk User Creation with PowerShell

Now that I have a domain and a working email server, it's time to create some users for the network. I could create these individually using the Active Directory Users and Computers interface in RSAT, but instead I am going to use PowerShell to script the process. User creation with PowerShell is easy and pretty straightforward: I will make a CSV file with the account information and then use PowerShell to create the accounts quickly.

User Creation with PowerShell

Prerequisite: Creating a Domain: Install Active Directory in Server Core

Creating a user with PowerShell starts with importing the Active Directory module, which provides the needed cmdlets. From there, it's as easy as using the “New-ADUser” command to add a user. I am going to write a script that first reads a CSV file containing the user information and assigns each field to a variable. The script then checks whether the user already exists, and if it doesn't, creates it using “New-ADUser”.

Here is the PowerShell script I wrote for this:

Import-Module ActiveDirectory

$NewUsers = Import-Csv C:\Users\BAdmin\Desktop\Users.csv

foreach ($User in $NewUsers) {
    $username   = $User.SAMAccount
    $password   = $User.password
    $firstname  = $User.FirstName
    $lastname   = $User.LastName
    $email      = $User.email
    $title      = $User.title
    $department = $User.department

    # Skip accounts that already exist in the domain
    if (Get-ADUser -Filter {SamAccountName -eq $username}) {
        Write-Warning "User already exists with that name."
    }
    else {
        New-ADUser -SamAccountName $username `
            -UserPrincipalName "$username@globomantics.local" `
            -Name "$firstname $lastname" -GivenName $firstname `
            -Surname $lastname -Enabled $True `
            -DisplayName "$lastname $firstname" -EmailAddress $email `
            -Title $title -Department $department `
            -AccountPassword (ConvertTo-SecureString $password -AsPlainText -Force) `
            -ChangePasswordAtLogon $True
    }
}
Bulk user creation with PowerShell script reading CSV file
PowerShell Script in the PowerShell IDE

I also made the CSV file “Users.csv” and stored it on the Windows 10 Admin PC desktop where I wrote this script. The CSV has these headings matching the script’s variables: SAMAccount, password, FirstName, LastName, email, title, and department.

Example CSV file for bulk user creation with PowerShell
Example CSV file used for my domain
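A row in Users.csv then looks something like this (the account values here are just illustrative):

SAMAccount,password,FirstName,LastName,email,title,department
JJohnson,Sup3rS3cret!,Jane,Johnson,jjohnson@globomantics.local,Engineer,Engineering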

After creating both of these files, making the user accounts is as easy as running the script. After that the user accounts will show up in Active Directory.

Active Directory Users and Computers
New user accounts populate in Active Directory after running the script.

Adding User Mailboxes

Prerequisite: Enable Email Services: Configure DNS for Email and Testing

Using the iRedMail server admin page, making mailboxes for the new users is also a very easy and straightforward process. From the admin page, add a mail user and then fill out their required information.

Example adding user account in iRedMail server
Example creation of the mail user account for Bob
View of all mail users for the globomantics.local domain
All user mailboxes for the Globomantics domain

Once all the users are added, you can login to any of the domain workstations and then use either the webmail interface or an email application to check their email. In my next post I am going to create the client machines and use Thunderbird to connect to the iRedMail server.

Enable Email Services: Configure DNS for Email and Testing

My next step in adding Email services to my lab domain is to configure DNS for Email. This involves adding the required MX, CNAME, and A records on the Pi-Hole so external email traffic can route correctly to the Globomantics domain.

Configure DNS for Email

Prerequisite: Create My Own Email Server: Install and Configure Email on an LXC

To enable Email services, I first need to add an A record for my new server. I used the Pi-Hole admin panel to create this record mapping the name “mx” to 10.10.1.5. After creation the new record is visible in the admin panel.

Local domain list in Pi-Hole with new record for host mx
Pi-Hole admin panel after creating the new A record

Next, I am going to add a CNAME to my DNS configuration that creates an alias for mx.globomantics.local. The alias I am assigning is mail.globomantics.local. I am using the Pi-Hole admin panel again for this change.

Adding a CNAME record to Pi-Hole
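Behind the admin panel, Pi-Hole stores these records in plain files, so the equivalent entries would be roughly the following (the file paths assume a Pi-hole v5 install):

/etc/pihole/custom.list:
10.10.1.5 mx.globomantics.local

/etc/dnsmasq.d/05-pihole-custom-cname.conf:
cname=mail.globomantics.local,mx.globomantics.local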

Last, I need to add an MX record, which directs mail for the Globomantics domain to my new email server. In the Pi-Hole configuration panel there is no option to add an MX record, so I need to add a custom file to dnsmasq. To create the file, I use the command below and then edit it using Nano.

# touch /etc/dnsmasq.d/99-mail.conf

Within the file, I add this line to create the MX record mapping incoming mail destined to the globomantics.local domain to my MX host.

mx-host=globomantics.local,mx.globomantics.local,1
Configure DNS for email, dnsmasq MX record line
Custom file with MX record for dnsmasq

Once created, the last step in DNS configuration is testing that the records resolve correctly. I used nslookup on the Windows 10 Admin PC. The commands below query the MX records for the globomantics.local domain.

C:\Users\BAdmin>nslookup
> set q=mx
> globomantics.local
Output of nslookup command query for MX records
Output from nslookup shows that the domain MX record resolves correctly

A Quick Firewall Change

To enable the server to send and receive email, I need to open a few firewall ports on the DMZ. Reading through the iRedMail configuration, the minimum ports I need open are 25 (SMTP), 587 (Submission), and 143 (IMAP). I added those using the same method as before.

pfsense firewall changes to enable required services
Added ports 25, 587, and 143 to DMZ interface on pfsense

Testing Login to Postmaster Account

Now that DNS and firewall configuration is done, I can test logging in to the Postmaster account and check my email. I am going to login to the Roundcube webmail by navigating to the alias address: mail.globomantics.local and then logging in as Postmaster.

Roundcube login page
Roundcube login page loaded after navigating to MX alias
Postmaster mailbox view on initial login
Successful login to Postmaster account and inbox view in Roundcube

That’s all there is to it! My server is ready for user Email. That is the topic for my next post where I will create users in bulk, and create their mailboxes.

Create My Own Email Server: Install and Configure Email on an LXC

The next step to creating my lab is enabling email services. I decided to create my own email server by installing the open source email server iRedMail on an Ubuntu LXC. I chose iRedMail because of its simplicity to install, configure, and operate. This allows me to add email capabilities to the domain without purchasing software and without adding too much complexity to the build. I chose an LXC to minimize resource use.

Create My Own Email Server on an LXC

Prerequisites:

I used the same process as the Pi-Hole install to create the LXC in Proxmox. The specifications for the email server are below; a rough command-line equivalent is sketched after the list:

  • Name: mx
  • Template: Ubuntu 20.04 standard
  • Disk: 20GB
  • CPU Cores: 1
  • Memory: 1024MB / Swap: 1024MB
  • Network: DMZ-net with static IP address and MTU of 1450
  • DNS: DMZ Pi-Hole IP
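The container creation itself happens in the Proxmox GUI, but the same settings map to a single pct create command on the host. This is only a sketch: the container ID, template file name, gateway, and nameserver below are assumptions, not the exact values from my build.

# pct create 110 local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz \
    --hostname mx --cores 1 --memory 1024 --swap 1024 \
    --rootfs local-lvm:20 \
    --net0 name=eth0,bridge=DMZnet,ip=10.10.1.5/24,gw=10.10.1.1,mtu=1450 \
    --nameserver 10.10.1.4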

After installing I ran the commands below to update, upgrade, confirm the hostname, and install dependencies.

# apt update && apt upgrade -y
# hostname -f
# apt install gzip dialog

Next, I downloaded iRedMail, renamed the file, uncompressed it, and ran the shell installer with these commands.

# wget https://github.com/iredmail/iRedMail/archive/tags/1.6.2.tar.gz
# mv 1.6.2.tar.gz iRedMail-1.6.2.tar.gz
# tar zxf iRedMail-1.6.2.tar.gz
# cd ./iRedMail-1.6.2
# bash iRedMail.sh

After running these commands, the installer will start.

iRedMail installer screen

When installing iRedMail I selected the following options:

  • Mailbox store: /var/vmail
  • Web server: Nginx
  • Backend: Postgres
  • Mail domain name: globomantics.local
  • Optional components:
    • Roundcube
    • netdata
    • iRedAdmin
    • Fail2ban

Then I just had to confirm the settings to install it.

Confirming the options and installing
Confirming options and installing

After the install finishes, I got a screen with the admin web page addresses and postmaster login information. I chose the next recommended option of reading the /root/iRedMail-x.y.z/iRedMail.tips file for more information. After that I just rebooted the server to enable email. The next step is to configure DNS for email services on my domain and login to test the email. That is the topic for my next post.

Making GPO Updates and Changes with Remote Server Admin Tools

The next step to getting my lab ready is to make some updates to my servers and computers. The fastest way to update all of my machines is to update the group policy objects (GPOs). After I update the GPO policies, I am going to check the DNS configuration and make sure my domain is ready for the remaining workstations.

Updating GPO policies

Prerequisite: Creating a File and Windows Server Update Services (WSUS) Server

I am going to use group policy management console (GPMC) to update GPO policies for my domain. The first policy change I need to make is to finish enabling WSUS on my domain by mapping all of the workstations and servers to WSUS for updates. To make this change I am going to make a new GPO for the domain called WSUS settings in GPMC.

Create a new GPO in this domain
Create a new GPO in this domain and link to domain
Editing the new GPO
Editing the new GPO

Now I go to Computer configuration > Policies > Administrative Templates > Windows components > Windows Update > Specify intranet Microsoft update service location.

Configure specify intranet Microsoft update service location
Selecting the policy to update

First, to map the domain servers and workstations to my WSUS server, I enter the address of my WSUS server in the policy. Then I enable the policy so it takes effect.

Add the web address for WSUS to the right locations in the policy
Adding the web address for my internal WSUS

Next, I need to configure the schedule for automatic updates. This sets all the machines to download updates from the internal WSUS and selects the desired behavior. I set the Configure Automatic Updates setting to make this change.

Configure automatic updates in GPO
Setting automatic updates in GPO

Update Policy for Add/Remove Features

I need to make one more change related to WSUS. Right now WSUS is only used for software updates, but I also need to map workstations and servers to WSUS for adding and removing features. That is in Computer configuration > Policies > Administrative Templates > System > Specify settings for optional component installation and component repair.

Mapping add/remove features to internal WSUS

That’s it for the WSUS GPO. Now my workstations will map to the internal server for updates and features. Next I need to check DNS settings. I am going to use the DNS manager in RSAT for this.

Opening screen of DNS manager
DNS Manager for my domain

First, I am going to set the server to only listen on the IPv4 interface for DNS queries. Then I am going to check the DNS forwarders and make sure it is using the Pi-hole in the DMZ.

Selecting interfaces for DNS listeners
Selecting the IPv4 interface for DNS listeners
Checking DNS forwarders
Checking DNS Forwarders

My DNS is configured and working as expected, and my domain is ready for workstations. Before I add workstations, though, I am going to add one more service: email.

Creating a File and Windows Server Update Services (WSUS) Server

In this step I am creating a new server using the Windows Server 2019 template created earlier in this process. Its first role is providing my network with shared storage as a file server; then I will add Windows Server Update Services (WSUS). I can install the basic services with PowerShell and use the remote administration tools on my Admin PC to manage WSUS.

Creating a File Server

Prerequisites:

The first step is to clone the Windows Server template into a new VM that I am going to call FileSrv. After cloning, I just start the machine and change the Administrator password.

Changing the administrator password after reboot
New server started, changing the admin password

Joining the domain in Server Core is easily done using SConfig. I changed the name on the server to FileSrv and joined the CORP domain in my lab. I also checked the network settings to make sure everything is correct.

Renaming the server in SConfig
Rename the server in SConfig
Checking the IP address in SConfig
Checking IP address in SConfig
Joining the corp.globomantics.local domain using SConfig
Joining my lab network domain from SConfig

Once the server rebooted, I logged in with the Bob Admin credentials and prepared to install the File Server role. Installing the file server role is done through PowerShell using the command below.

PS C:\Users\BAdmin> Install-WindowsFeature File-Services
Feature is installing, shows install progress
Installing file services feature
File Services feature successfully installed
File services successfully installed

Next I had to create a shared folder and then set the permissions using PowerShell. Here are the commands.

PS C:\Users\BAdmin> md C:\Globoshare
PS C:\Users\BAdmin> $acl = get-acl C:\GloboShare\

PS C:\Users\BAdmin> $ace = new-object system.security.AccessControl.FileSystemAccessRule('Authenticated Users', 'Modify', 'Allow')

PS C:\Users\BAdmin> $acl.AddAccessRule($ace)
PS C:\Users\BAdmin> $acl|Set-Acl

Finally I could share the folder using New-SmbShare.

PS C:\Users\BAdmin> New-SmbShare -Name Globoshare -Path C:\Globoshare -FolderEnumerationMode AccessBased -CachingMode Documents -EncryptData $True -FullAccess Everyone

Now the file share is available on the entire domain at \\FileSrv\Globoshare.

Installing Windows Server Update Services (WSUS)

Installing WSUS on Server Core is a little more complicated than configuring a shared file. For this role, I will need to use a combination of PowerShell commands and the admin tools on my Win10Admin PC. The first step is installing the Windows Feature in PowerShell.

PS C:\Users\BAdmin> Install-WindowsFeature UpdateServices -Restart

After installing, I need to run a post install task using this command.

PS C:\Users\BAdmin> "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=C:\WSUS

Next I need to enable remote administration of this server. That involves adding Web Management Service, along with ensuring it starts up automatically on reboot. I also need to make a registry change to enable remote management.

PS C:\Users\BAdmin> Install-WindowsFeature Web-Mgmt-Service
PS C:\Users\BAdmin> reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 00000001
Enable remote IIS management using PowerShell
Enable remote IIS management using Powershell
PS C:\Users\BAdmin> Set-Service WMSVC -StartupType "Automatic"
PS C:\Users\BAdmin> Start-Service WMSVC

From here the remaining steps are done in the remote admin tool. I followed the prompts and selected Microsoft Update as my upstream provider. This step took a long time for the initial sync.

Starting screen of WSUS after configuring remote management.
Starting screen after configuring remote management
Download update information from Microsoft Update
Connect to Upstream server

After that I selected the language and products that apply to my domain, along with the classifications. Then I configured the Synchronization schedule.

Setting WSUS to synchronize manually
Synchronize Updates Manually in WSUS

After finishing I chose to begin the initial synchronization.

Begin initial synchronization in WSUS.
Begin initial synchronization in WSUS

With that, my new server is ready to go. I installed file services, created a network share, and configured WSUS so my machines have a single update server where I can approve each update. Next I have some group policy changes and other domain changes to make from RSAT on the Windows 10 Admin PC.
