The Forge2 is the primary hardware platform on which the AniNIX runs.

Etymology

The Forge2 is the second Forge build; the original was two towers instead of one.

It is so named because the exterior is solid black with soft red LEDs internally -- this creates an appearance similar to a furnace.

The Forge builds are also so named because projects are created, developed, and tested in these frames.

Capacity and Components

  • 6-core hyperthreaded Intel Core i7 at 3.4 GHz, water-cooled by a Corsair H100i two-fan cooler
  • 24 GB RAM
  • 13.2 TB onboard storage, with one hotswap slot open:
    • 60 GB solid-state boot drive for the Windows 10 Pro hypervisor (Hyper-V)
    • 1 1 TB drive dedicated to additional user space and VMs
    • 1 2 TB drive dedicated to Windows data, formatted as NTFS
    • 1 2 TB drive dedicated to Windows Backup, formatted as NTFS
    • 2 2 TB drives dedicated to Core -- see Core for the filesystem hierarchy there.
    • One hotswap bay for Aether backups.
    • 1 150 GB drive for the DarkNet VM
  • USB 2.0 & 3.0 and eSATA slots
  • 2 10Gb NICs -- one for VMs and one for Windows
  • Bluetooth adapter
  • Hyper-V virtualization under Windows 10 Pro
  • 1200W Corsair power supply
  • EVGA X79 Dark motherboard with a PCI-e SATA extender
  • SLI'ed GTX 760 GPUs with 4 GB onboard memory each
  • Corsair K70 keyboard with red LEDs and Corsair M65 mouse
  • CyberPower UPS

Hosted Services and Entities

{{Reference|Core}}{{Reference|Windows}}{{Reference|DarkNet}}

Connections

{{Reference|Infrastructure}}{{Reference|Shadowfeed}}{{Reference|Core}}{{Reference|Windows}}

Additional Reference

A gallery will be added.

Hypervisor Notes

Hyper-V integrates VMs with Windows, allowing VMs to be started at Windows boot, providing direct disk access, and managing assignment of cores, memory, and disk.

ShadowArch guests with a GUI should include xf86-video-fbdev and set GRUB_CMDLINE_LINUX_DEFAULT="quiet video=hyperv_fb:1920x1080" to get maximum screen resolution.
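As a sketch, the relevant line lives in the guest's /etc/default/grub (the 1920x1080 resolution is an example -- match it to your display):

```
# /etc/default/grub on a ShadowArch Hyper-V guest
# video=hyperv_fb:<width>x<height> sets the Hyper-V framebuffer resolution
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=hyperv_fb:1920x1080"
```

After editing, regenerate the bootloader config with `grub-mkconfig -o /boot/grub/grub.cfg` and reboot the guest.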

Hyper-V comes with a few limitations: PCI and USB devices can't be passed through without third-party software, but this was considered acceptable.

Hyper-V guests require significant configuration to prevent performance problems. Dynamic memory should be disabled to prevent a guest from overrunning the host. Data Exchange, Backup, and Guest Services should all be disabled from integration services. Disable checkpoints. Automatic start action should either be on startup or disabled, and automatic stop action should always be poweroff.
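The per-guest settings above can be scripted with Hyper-V's PowerShell module. A sketch, assuming a guest named "Core" (substitute your VM name; integration-service display names can vary by Windows version):

```powershell
# Disable dynamic memory so a guest cannot overrun the host
Set-VMMemory -VMName "Core" -DynamicMemoryEnabled $false

# Disable checkpoints; start at boot and power off on host shutdown
Set-VM -Name "Core" -CheckpointType Disabled `
    -AutomaticStartAction Start -AutomaticStopAction TurnOff

# Disable the Data Exchange, Backup, and Guest Services integrations
Disable-VMIntegrationService -VMName "Core" -Name "Key-Value Pair Exchange"
Disable-VMIntegrationService -VMName "Core" -Name "Backup (volume shadow copy)"
Disable-VMIntegrationService -VMName "Core" -Name "Guest Service Interface"
```

Set `-AutomaticStartAction Nothing` instead if the guest should not start at boot.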

Hyper-V itself also requires configuration of the Windows host. The default High Performance power profile turns off monitors when not in use but does not put the entire frame to sleep -- this is the desired behavior.

Antivirus

If using VirusScan, make sure the Hyper-V host follows the guidance in VirusScan#Hyper-V.

Even with that configuration, antivirus still caused drops in virtual disks, crashing several VMs, so we have suspended antivirus on the hypervisor, along with most general-purpose browsing. Read the following for other user experiences.

  1. https://www.cnet.com/how-to/i-dont-use-anti-virus-software-am-i-nuts/
  2. https://www.reddit.com/r/windows/comments/41b0k0/is_antivirus_software_still_necessary_for_windows/

Windows Update

The Windows Update service, if it deems the system too out of date or in need of critical fixes, may forcibly restart the system. We recommend keeping the Windows Update service disabled on hypervisors until patching is desired. This can be done in services.msc.

We additionally recommend setting the "gpedit.msc > Computer Configuration \ Administrative Templates \ Windows Components \ Windows Update \ Configure Automatic Updates" option if you have a Pro or Enterprise license.
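As a sketch, the service can also be stopped and disabled from an elevated command prompt (wuauserv is the Windows Update service name; the space after "start=" is required by sc):

```
sc stop wuauserv
sc config wuauserv start= disabled
```

Reverse with `sc config wuauserv start= demand` when you want to patch.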

Sleep Mode

Sleep mode, even immediately interrupted, has been observed to break network connectivity and VM uptime. When running as a Hypervisor, it is advisable to disable sleep and hibernate modes. Change these from Group Policy under Administrative Templates>System>Power Management>Sleep Settings. Enable "Turn Off Hybrid Sleep" and disable "Allow Standby States (S1-S3) when sleeping".
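For hosts without Group Policy, a sketch of the equivalent from an elevated command prompt (timeouts are in minutes; 0 disables):

```
powercfg /hibernate off
powercfg /change standby-timeout-ac 0
powercfg /change monitor-timeout-ac 20
```

This disables hibernation and AC sleep entirely while still letting monitors turn off, matching the desired behavior above.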

Previous Hypervisors

VirtualBox

Oracle VirtualBox is a free hypervisor that can run on almost any OS. This leaves deployment and device-driver management entirely to the stock OS -- Windows in our case -- which alleviates driver problems. Management is also easy, particularly with an admin account: assigning cores, memory, and the like to a VM is straightforward. VirtualBox can assign raw disk access with VBoxManage. Use Windows Disk Manager (diskmgmt.msc) to identify the disk; in the example below, 7 is the disk number.

"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" internalcommands createrawvmdk -filename "C:\Users\Admin\VirtualBox VMs\raw7.vmdk" -rawdisk \\.\PhysicalDrive7
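The resulting VMDK can then be attached to a VM's storage controller. A sketch, assuming a VM named "Core" with a SATA controller named "SATA" (both names are examples):

```
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" storageattach "Core" --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\Users\Admin\VirtualBox VMs\raw7.vmdk"
```

Note that VirtualBox must run with administrator rights to open a raw physical drive.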

VirtualBox was dropped due to buggy integration with the running OS and the inability to start VMs at OS boot.

ArchLinux/KVM-enabled QEMU

The Forge2 frame has a 60GB SSD installed for KVM-enabled QEMU virtualization inside a minimal ArchLinux host. This implementation allows passing any host resources to the guest, including USB and PCI devices, which is an advantage over other hypervisors.

While the Intel VT-d provided by the motherboard ostensibly supports this passthrough, on the X79 it imposed hardware caps (4 hard drives, 1 CD drive) that the AniNIX could not afford without disabling KVM, and the network bridging created problems for VPN clients.

Alternatives

You could, in theory, host a clone of the AniNIX network in the cloud. There are steps to set up ArchLinux in [http://codito.in/archlinux-on-azure/ Microsoft Azure] and [https://bbs.archlinux.org/viewtopic.php?id=186707 Google Cloud]. This may be advantageous for sites that have uptime concerns, low local resources, or physical security concerns.

From a cost perspective, power and network for a Forge2 and Shadowfeed cost roughly $100 per month on top of a $6,000 buy-in. Equivalent cloud solutions would need to supply at least one full backup image with highly available power and network, along with hardware matching Forge2#Capacity and Components.

You should look at the Aether notes on cloud computing if you are considering this option.