Project-FiFo Blog

Articles and Blog Posts related to Project FiFo

  • HOME
  • BLOG INDEX
  • CONTACT

FiFo.cloud VPN – automatic multi cloud overlay networks

April 11, 2018 By Heinz N. Gies

Let’s talk about Virtual Private Networks in hybrid cloud environments. While working on a big project like FiFo.cloud (and its VPN), we always try to dogfood our solution. This practice helps us focus on the real problems instead of wandering off into the land of feature creep. As part of this, we run systems on Packet, DigitalOcean, OVH, in our test lab, and at home.

Now, the experience of managing systems that are spread out from one place is quite lovely. That said, we identified one slightly annoying limitation: when creating zones in different areas, connecting them is a huge pain.

The pain of multiple segregated networks destroyed the experience we want to deliver: seamless integration of a multi-cloud or hybrid environment. If you have to set up routes and forward ports, it is not the experience we want to provide or use ourselves.

FiFo.cloud VPN

Enter FiFo.cloud VPN! Thinking about how to solve these problems, we looked at a few solutions and weighed the pros and cons of the different possibilities. In the end, we decided that a mesh VPN is pretty close to what we would want ourselves. Anyone who has ever set up a VPN knows that it is not exactly something that falls into the category of “fun” (spoiler: it is not). Instead, it is boring, error-prone legwork – so we automated it. FiFo.cloud can create a full mesh VPN over all your hosts. All you have to do is activate the VPN feature and select the hosts to enable it on!

The design

fifo.vpn design

The FiFo.cloud VPN is split into two separate planes: control and data. The control plane runs through us, but the data stays close to your hosts: packets take the direct route between them instead of having to go through a central VPN endpoint. Not only does that keep traffic down, it also optimizes latencies. In other words, network performance – both throughput and latency – depends solely on the connection between your hosts, not on our network. Last but not least, it allows us to keep costs down.

However, this is not all! You are not merely limited to a VPN connecting your hosts. With FiFo.cloud you can now create overlay networks that span this VPN making it one big transparent network. As a result, it is possible to treat a multi-cloud or hybrid-cloud model as one big installation.

Limitations

We believe we should never just mention ‘the good.’ In the end, all technological decisions are tradeoffs, and it is important to be clear about them. So let’s take a moment to look at what the FiFo.cloud VPN is not.

First and foremost, FiFo.cloud VPN is not a consumer product. The VPN is a tool for experts, specifically designed to serve one need: interconnection in hybrid or multi-cloud setups. In the same sense, it is not a privacy tool; the mesh design and the fact that we do not route traffic mean that connections between your hosts are just that – connections between your hosts. Moreover, while links are encrypted to protect against eavesdropping, the connection metadata remains intact.

Coming right back to the last topic: the FiFo.cloud VPN does not route traffic. As a result, we do not provide a gateway or exit nodes on it. You can of course set up your own gateways for overlay networks, but that is something that has to happen in your own infrastructure.

Throughput is another thing to keep in mind. Using a VPN will reduce throughput compared to an unencrypted connection. While it might not make that much of a difference in a multi/hybrid cloud setup, remember: if it is crucial, benchmark it yourself. However, again, given the decentralized architecture, no single choke point needs to serve the full throughput of the network; the connection between the nodes talking to each other is the determining factor for network speed.
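If you do want to benchmark it, a quick comparison is easy to run yourself. A minimal sketch with iperf3 (assuming it is installed on both hosts; the addresses are placeholders for your own):

```shell
# On host A: start an iperf3 server
iperf3 -s

# On host B: measure the direct connection first, then the VPN path,
# and compare the reported bandwidth of the two runs
iperf3 -c <public ip of host A>
iperf3 -c <vpn ip of host A>
```

Running both measurements from the same host pair keeps the underlying provider link constant, so the difference between the two numbers is the overhead of the encrypted overlay.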

Last but not least, you are responsible for your traffic. The FiFo.cloud VPN does not provide dedicated interconnects between cloud providers! In other words, the normal ingress and egress rates of your providers apply to all traffic generated on your VPN.

Interested? Say Hi!

At the time of writing, the FiFo.cloud VPN is in a closed beta. Internally we already use the FiFo.cloud VPN with great success. However, we want to make sure edge cases are covered before rolling the feature out broadly. So if you are interested and want to give it a try, or talk to us about it, let us know (mail support at project-fifo dot net or use the little ‘Help’ button at the bottom of fifo.cloud).

Filed Under: Project-FiFo

fifo.cloud – the problem we solve

March 1, 2018 By Heinz N. Gies

When I design and write software, one of the most important things to me is that it solves a real problem. This has been a theme with everything Project-FiFo has released so far: any system that we put out there solves a real problem.

FiFo itself was built out of the need to have a way to manage a SmartOS hypervisor; at that point, SDC was still proprietary and no other system existed at all. DalmatinerDB exists out of the need for a clustered high-performance metric store, an area where it still excels despite all the movement in the field over the last years.

So where did FiFo.cloud come from? It comes from what the cloud ecosystem has become. It is no longer a monolithic place; it is no longer AWS or nothing. Today there is a myriad of cloud providers, and users are starting to avoid putting all their eggs in one basket rather than hoping their provider miraculously is the one that does not have issues at one point or another. Couple this with in-house hardware and the occasional co-located server, and we end up with an infrastructure mess that is worse than what we had before the cloud era.

Project-FiFo itself uses four different cloud providers, a test lab, and some co-located systems. I’ll be blunt here: it’s no fun to deal with so many different things. Even with automation it’s a mess – keeping everything in sync and documented, different portals, SSH all over the place.

This is where fifo.cloud comes into play: fifo.cloud allows you to pull all those different hosts into one single place. Put a fifo.cloud agent on the host, or VM, or hell, the desktop under your desk, and you can manage it from one location. It no longer matters where the system sits, how many different providers you use, whether you have an in-house lab, or whether you run dedicated servers in your own rack – it is all the same. Everything is reachable with just a few clicks.

Filed Under: Project-FiFo

Fifo.cloud Network scripts

January 17, 2018 By Kevin Meziere

As we continue to roll out features to Fifo.Cloud, this week we would like to show off Network VM State Scripts. When we started with the idea of fifo.cloud, we wanted a solution where people could place native containers in multiple data centers. One of the complications of running containers in a public cloud environment is that the user often does not control the network, and many times network controls are in place that require the use of APIs in order to activate additional IP addresses and connections. In this post, we will show how a fifo.cloud user can use Network VM State Scripts to make API calls and set up routing in a public cloud environment.

Throughout this post, we will use Packet as an example service provider, but the concepts should apply to other cloud providers as well.

Like many other cloud providers, Packet.net provides the ability to assign additional IP addresses via Elastic IPs. When Packet Elastic IPs are assigned to a host, Packet directs all traffic for those IPs to the network interface on that host. The trick to using this setup is that all outgoing traffic must be routed through the management IP address. Attached is a diagram showing the necessary layout.

When an elastic IP address is assigned to a host, traffic for that IP is passed to the primary interface. Routing inbound traffic requires adding a route for the container’s IP to the bridge interface that the VM is connected to. Outbound traffic requires the VM’s default gateway to be an IP on the bridge. The host must then have routing enabled so that traffic routed through the bridge IP goes out the default interface of the host.

All of this takes several steps, which would otherwise be rather manual. Luckily, Fifo.Cloud has added network create and delete scripts. These scripts run for each network interface that is created on a VM, allowing for this sort of routing addition or deletion. In a customer-owned network, create and delete scripts could be used to set up SDN routes, start BGP announcements, or anything else you can imagine.
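As a rough sketch of what such a create script could do for the routing described above – note that the argument convention, the route syntax, and the API endpoint here are all illustrative assumptions, not FiFo’s or Packet’s actual interfaces:

```shell
#!/bin/sh
# Hypothetical network create script. Argument order, variable names and
# the API endpoint are placeholders, not FiFo's or Packet's real interfaces.
VM_IP="$1"        # IP assigned to the new container interface
BRIDGE_IP="$2"    # IP of the bridge the VM's interface is attached to

# Route inbound traffic for the container's IP towards the bridge
route add -host "$VM_IP" -iface "$BRIDGE_IP"

# Activate the address via the provider's API (placeholder URL)
curl -s -X POST "https://api.example.net/v1/ips" -d "address=$VM_IP"
```

A matching delete script would undo the same steps in reverse.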

We have put together a how-to for using network create/delete scripts at https://docs.fifo.cloud/howtos/public-cloud-ip/ . While these directions are specific to Packet, the basics will apply to other providers.

Every new feature we roll out brings us closer to a production release of fifo.cloud, and we can’t wait to see what you will build with it!

 

 

Filed Under: Project-FiFo

try.fifo.cloud open for all

January 12, 2018 By Heinz N. Gies

While the article below is still true in spirit, the free accounts have been integrated with fifo.cloud; the try instance is no longer needed.

Today our hosted management platform for SmartOS and FreeBSD – fifo.cloud – is leaving its closed alpha stage. As part of this, we’re opening up try.fifo.cloud. We would like to use the chance to share a few words about what try.fifo.cloud is and why it exists.

When we started working on fifo.cloud, it was very important to us to give back to the communities that form the foundation on which we build the software we love. To this end, we decided early on to offer free accounts to enthusiasts and home users.

Out of this, try.fifo.cloud was born. It offers a free single-host account to home users and everyone who just wants to run a small system or toy around. During the alpha, we decided to keep a waiting list to ensure we had some control over signups; now, with the alpha done, we want to open this up.

try.fifo.cloud offers the same set of features as our small plan, with the one difference that it supports only a single host. We hope that it will be useful for home users who run a SmartOS or FreeBSD host at home and just want a simple UI to manage it. To ensure it is truly useful, we do not limit the number of zones, jails, or KVMs run on the host, and it offers the same API as the paid plans.

So if this piqued your interest, head over to try.fifo.cloud, sign up, and play around with it. It’s free, no strings attached. And again, thank you to all the people in the greater SmartOS and FreeBSD communities that make projects like FiFo and fifo.cloud possible!

Filed Under: Project-FiFo

SmartOS on Packet.net

January 5, 2018 By Heinz N. Gies

There is a wonderful blog post about how to get a PXE server up and running to boot SmartOS. In this post, I want to explain the next step: getting the booted SmartOS box fully functional.

As a start, we provide a fully set up PXE server which everyone is welcome to use; you can use our boot script if you want to skip the setup. Of course, if you’d rather set up your own, you can just follow the instructions in the blog post and replace the iPXE script with the following:

#!ipxe
dhcp
set base-url http://pxe.fifo.cloud
kernel ${base-url}/smartos/platform/i86pc/kernel/amd64/unix -B smartos=true,console=ttyb,ttyb-mode="115200,8,n,1,-"
module ${base-url}/smartos/platform/i86pc/amd64/boot_archive type=rootfs name=ramdisk
boot

Creating the Host

Note: this is currently tested on x1.small and t1.small.x86 instances. Instances with Mellanox cards are not supported by SmartOS!

Now, with that prepared, the next step is to set up a server on Packet. Select ‘Custom iPXE’ as your OS and put in either our iPXE URL or your own.

 

Before you continue with the installation, press the ‘Manage’ button to go to the advanced settings. Scroll down all the way to the bottom and select ‘Persist PXE …’ – that way the server will keep booting via PXE after the installation.

 

The server will then start to provision, and after a while you will be able to go to the details page. Find the button and click it to get an SSH command to connect to your server’s console.

While you are here, you can note down the private (2) and public (1) IPs and gateways for your server, as you’ll need them in the next step.

 

Installing SmartOS

Once you have connected to the server’s console, the usual SmartOS installer will greet you. When going through it, answer the questions as follows:

  • Admin Interface: 1st nic
  • Admin IP: Private IP (2)
  • Admin Netmask: 255.255.255.240 (this needs to be .240 no matter what Packet says!)
  • Admin Gateway: Private gateway (2)
  • Headnode Gateway: Public Gateway (1)
  • NTP, DNS, Disk: defaults (or adjust as desired), it will not be able to connect at this point!

A quick word on the netmask: Packet hands out /31 ranges, which ifconfig does not swallow on its own. We will use a setup service at the end of this tutorial to set it properly, but the configuration needs to be set to .240 to ensure everything boots fine.
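To make the netmask arithmetic concrete, here is a small bash helper – purely illustrative, not part of the setup service – that converts a prefix length to its dotted netmask:

```shell
# Convert a prefix length to a dotted netmask (bash).
# Illustrative helper only; not part of the Packet setup service.
prefix_to_netmask() {
  local p=$1
  # Set the top p bits of a 32-bit word, clear the rest
  local mask=$(( 0xffffffff ^ ((1 << (32 - p)) - 1) ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $(( mask & 255 ))
}

prefix_to_netmask 28   # 255.255.255.240 - what we configure during install
prefix_to_netmask 31   # 255.255.255.254 - what Packet actually hands out
```

A /31 leaves no room for the usual network and broadcast addresses, which is exactly why the installer’s ifconfig chokes on it and we temporarily pretend the network is a /28.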

 

This is what the result should look like, you can save this and reboot.

Configuration

Once rebooted we will edit the /usbkey/config file and enter the values as follows:

admin_nic=<mac of 1st nic>                # this does not need to be changed
admin_ip=<admin ip from 2>                # take from the vms networks
admin_netmask=255.255.255.240             # this needs to be 240!
admin_gateway=<admin gateway from 2>      # take from the vms networks

external_nic=<mac of 1st nic>             # same as admin_nic
external0_ip=<public ip from 1>           # take from the vms networks
external0_netmask=255.255.255.240         # this needs to be 240
external0_gateway=<public gateway>        # take from the vms networks
headnode_default_gateway=<public gateway> # take from the vms networks

dns_resolvers=8.8.8.8,8.8.4.4             # change if you like
dns_domain=local                          # can be changed

ntp_hosts=0.smartos.pool.ntp.org          # can be changed

hostname=smartos-test                     # use your own hostname

Note: For x1.small instances, instead of the MAC of the nic you can use an aggregate by adding the following line and using aggr0 instead of the MAC in the _nic= lines. You can get the secondary MAC by running: dladm show-phys -m

aggr0_aggr=<mac of nic 1>,<mac of nic 2>

To bring up networking, we’ll need another reboot at this point. This is also a good time to add additional public networks to your server in the portal if you want zones with public IPs.

Once the server is rebooted we have a packet setup service that will grab some information and do the last bits of setup.

First, set the netmask to what Packet expects:

ifconfig igb0 netmask 255.255.255.254
ifconfig external0 netmask 255.255.255.254

To install the script you can run:

curl -ks http://pxe.fifo.cloud/tools/install.sh | bash

or download the file and inspect it to be sure you want what it does. The script will:

  • install the SSH keys you asked Packet to add to the server
  • correct the netmasks on the external and public interface
  • set up routes for additional networks

A word on gateways: we’re setting up an additional interface with the IP 192.168.255.255 that functions as a gateway for public-facing zones; the setup script will in turn set up all routes for this gateway.
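Purely for illustration, the gist of that gateway setup could be sketched like this – the interface name is an assumption, and the actual setup script from pxe.fifo.cloud remains the authoritative source:

```shell
# Illustrative sketch only; the pxe.fifo.cloud setup script is authoritative.
# Add a logical interface carrying the gateway address for public-facing zones:
ifconfig igb0 addif 192.168.255.255 netmask 255.255.255.255 up

# Enable IPv4 forwarding so zone traffic routed via the gateway
# leaves through the host's default interface:
routeadm -u -e ipv4-forwarding
```

Zones then point their default route at 192.168.255.255, and the host forwards their traffic out.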

The first zone

To create the first zone we import a dataset:

imgadm import 23b267fc-ad02-11e7-94da-53e3d3884fe0

Then write a zone description, using a public IP from the additional range you added earlier.

{
 "autoboot": true,
 "brand": "joyent",
 "image_uuid": "23b267fc-ad02-11e7-94da-53e3d3884fe0",
 "delegate_dataset": true,
 "max_physical_memory": 1024,
 "cpu_cap": 100,
 "alias": "test",
 "quota": "5",
 "resolvers": [
  "8.8.8.8",
  "8.8.4.4"
 ],
 "nics": [
  {
   "interface": "net0",
   "nic_tag": "admin",
   "ip": "<additional public ip>",
   "gateway": "192.168.255.255",
   "netmask": "255.255.255.255",
   "primary": true
  }
 ]
}

You can then create the zone with the command:

vmadm create -f /opt/zone.json

Due to a bug in vmadm, you need to run two more commands to configure routing. Log into the zone and execute the commands:

route -p add 192.168.255.255 <zone ip> -interface
route -p add default 192.168.255.255

That’s it, your server is now set up and your zone should be able to reach the internet and be reachable from the internet.

Filed Under: Project-FiFo Tagged With: Cloud, SmartOS

FiFo.cloud – the everywhere console

November 28, 2017 By Heinz N. Gies

This week we added support for the jail and zone console, along with VNC for KVMs, in FiFo.cloud. While that is interesting in itself, there are some other fascinating aspects to it. One of our primary goals is to allow running FiFo.cloud agents everywhere. No matter if your server is at home sitting under your desk, in a data center, or a virtual machine at a cloud provider, we want you to be able to connect it to FiFo.cloud.

With console support, this means you can reach your zones and jails from anywhere with internet access and a browser. That aside, let’s talk a bit about how we do that.

 

For this, networking is the most significant issue. We know that inbound connections are complicated at best and impossible at worst, so that was not an option. Instead, we decided to do the least invasive thing possible: each fifo-agent opens a single TCP/TLS connection.

Then again, being restricted to a single outgoing connection comes with its own problems, especially when you require multiple bidirectional connections. To work around this limitation, we multiplex channels over the TLS connection we establish. This method allows us to have a bidirectional command channel along with multiple console channels at the same time.

 

Filed Under: Project-FiFo

FiFo Cloud – multi cloud management for everyone

November 10, 2017 By Heinz N. Gies

Today we are delighted to introduce the beginning of alpha access to FiFo.cloud. For the past five years, we have been working to create the most resilient on-premise open source cloud orchestration system. With Fifo.cloud, not only is there no longer a need to set up FiFo, but your cloud can now span multiple locations, whether that be your own DC, a public cloud, or a customer site with limited infrastructure.

At its origin, FiFo was born out of the need to manage lightweight containers in a multi-tenant fashion, and at the time there was no sound solution for that. Today Fifo.cloud closes the loop; it is what Project-FiFo has always been, but now more accessible than ever. Fifo.cloud retains Project-FiFo’s ability to scale to many hypervisors and manage massive numbers of containers, but it also eliminates the overhead of running on-premise Project-FiFo. With all the orchestration provided as a service, only a small agent must be installed on the hypervisor. The FiFo agent requires only a few dozen megabytes of memory, in contrast to other solutions that need gigabytes or even entire servers’ worth of resources.

We are very excited about what we have put into the alpha so far, and we are looking forward to releasing even more features. We look forward to the day when containers all over the globe can be managed from a single pane of glass, easily and effortlessly.

This is a limited alpha, so we will have a waitlist. Participants will have access to a free version while we are testing. We will ask alpha users to commit to providing feedback, and to understand that the system is under heavy development. Once we move into a production phase, we still want to make this accessible to everyone, not just large users, so we are exploring ways to offer a free tier. We love what we have built and are excited to share it with the world; if you are interested in the alpha, please sign up soon, as “space is limited.”

The beauty is that we use the same technology for FiFo.cloud that we are releasing as our open source FiFo so both these parts will grow from this together.

Filed Under: Project-FiFo

Creating a jail with FiFo and DigitalOcean

October 13, 2017 By Heinz N. Gies

This tutorial covers a simple setup of FiFo on DigitalOcean, starting with the installation, continuing with configuration, and finally creating our first jail.

Disclaimer

Before you get too excited, I want to put the bad news at the beginning: DO’s private networks do not allow multicast traffic, which means FiFo’s node discovery does not work beyond a single node by default. There are a few ways around that (like tunnels or VPNs), but that’d be beyond the scope of this little write-up.

Now the good news: FiFo works flawlessly on DigitalOcean’s FreeBSD image, the installation takes just a few minutes, and it runs stably. The whole process takes a bit less than 30 minutes, with most of that time spent compiling the kernel to enable RCTL and VNET, which are not enabled by default.

The tutorial is not meant to create a production system; security concerns are woefully ignored. It is meant for spending a few hours playing with FiFo and then tearing it down again. If you plan to run FiFo in production, please follow the official documentation or contact us for commercial support.

Creating the Droplet

For this example, we chose the FreeBSD 11.1 ZFS image and the 8GB / 4 CPU droplet so compiling doesn’t take too much time.

DigitalOcean - Create Droplet

Select your location (I like Frankfurt; it’s nice and close) and the SSH key to use. Because this will be a single-node system, there is no need for the “Private networking” option. With all the desired settings selected, go ahead and create your droplet. It’ll take a minute or two to boot.

DigitalOcean - Droplets

Configuring the system

Now we just install FiFo as covered in the documentation.

We download the kernel source code:

fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/`freebsd-version -k`/src.txz -o /tmp/src.txz
tar -C / -xzf /tmp/src.txz

This will take a moment or two; there are lots of files to download. Once we’re done, we’ll build our own kernel with VNET and RCTL. This too is described in the documentation, but for brevity, here we go. We start by creating a custom configuration.

cd /usr/src/sys/amd64/conf
cat > FIFOKERNEL <<EOL
include GENERIC
ident FIFOKERNEL

options         VIMAGE # VNET/Vimage support
options         RACCT  # Resource containers
options         RCTL   # same as above
EOL

cd /usr/src

After we’ve set up the configuration, we build and then install the kernel. Again, these commands will take a few minutes, so grab a coffee while you wait.

# Build the kernel
make -j4 buildkernel KERNCONF=FIFOKERNEL
# Install the new kernel
make -j4 installkernel KERNCONF=FIFOKERNEL

With the kernel installed, we set up a few modules to be loaded on the next boot, namely the Linux emulator and related modules, as well as enabling RCTL. After that, we reboot our droplet.

cat <<EOF >> /boot/loader.conf
linux64_load="YES"
linux_load="YES"
fdescfs_load="YES"
linprocfs_load="YES"
linsysfs_load="YES"
tmpfs_load="YES"
mac_portacl_load="YES"
kern.racct.enable=1
EOF

reboot

Once the system is back up, we can double check that our kernel was loaded.

uname -a
FreeBSD fifo-bsd 11.1-RELEASE-p1 FreeBSD 11.1-RELEASE-p1 #0 r324521: Wed Oct 11 09:12:27 UTC 2017     root@fifo-bsd:/usr/obj/usr/src/sys/FIFOKERNEL  amd64

All good. Next up, we create a few ZFS datasets to organize jails and FiFo data.

zfs create zroot/data
zfs set mountpoint=/data zroot/data
zfs create zroot/data/sniffle
zfs create zroot/data/snarl
zfs create zroot/data/howl
zfs create zroot/jails
zfs set mountpoint=/zroot/jails zroot/jails

Note: Now is a good time to make a snapshot using the DO dashboard if you plan on going through the setup more than once.

Configuring Networking

We need to configure some networking. FiFo uses a bridge to attach VNETs; we can create it using:

cat <<EOF >> /etc/rc.conf
cloned_interfaces="bridge0"
autobridge_interfaces="bridge0"
autobridge_bridge0="vtnet0"
ifconfig_bridge0="inet 192.168.1.1/24"
EOF

For jails to have real internet access, you’ll need NAT set up. We will make a simple NAT with PF, but this should only be used for testing. Also, of course, this only allows traffic out; in a production setup, an IP pool for jails is best practice.

cat <<EOF > /etc/pf.conf
IP_PUB="`ifconfig vtnet0 | grep "inet " | grep -v "inet 10" | cut -d: -f2 | cut -d\  -f2`"
NET_JAIL="192.168.1.0/24"
scrub in all
nat pass on vtnet0 from \$NET_JAIL to any -> \$IP_PUB
pass out log on vtnet0 proto { tcp, udp, icmp } all
EOF

cat <<EOF >> /etc/rc.conf
pf_enable="YES"
gateway_enable="YES"
EOF

/etc/netstart 
pfctl -nf /etc/pf.conf
service pf start

Installing the base components

Project-FiFo provides a pkg repository so installing the components is quite easy. To get the packages, we first tell the system about our new repository.

mkdir -p /usr/local/etc/pkg/repos
cat <<EOF > /usr/local/etc/pkg/repos/ProjectFiFo.conf
ProjectFiFo: {
  url: "pkg+https://freebsd.project-fifo.net/rel/amd64/11.0",
  mirror_type: "srv",
  enabled: yes
}
EOF
pkg update

Now we install the base services; these are required on every hypervisor, not only on managing systems. Since we run everything on one host, it doesn’t make a huge difference, but let’s do it correctly.

pkg install vmadm chunter zlogin

cat  <<EOF >> /etc/rc.conf
zlogin_enable="YES"
chunter_enable="YES"
vmadm_enable="YES"
EOF

Since the DO FreeBSD image comes with an unusual configuration of two IPs on the same interface, FiFo can’t automatically detect the network configuration. So before we start the services, we need to make sure chunter’s config file exists. Since we want everything on one host, it’s rather easy; we can just copy over the example.

cp /usr/local/lib/chunter/etc/chunter.conf.example /usr/local/lib/chunter/etc/chunter.conf

Now we start the services.

service zlogin start
service chunter start

Installing the management system

After the base services are installed and started, we install the management system; this too is done via packages.

pkg install bash fifo-sniffle fifo-snarl fifo-howl fifo-cerberus

Before we start the services, we configure howl to run on 8080 and 8443, so we do not require privileged ports. We do this by editing /data/howl/etc/howl.conf and changing the following lines:

## The port howl listens on for websockets
##
## Default: 80
##
## Acceptable values:
##   - an integer
http_port = 8080

## The port howl listens to.
##
## Default: 443
##
## Acceptable values:
##   - an integer
ssl.port = 8443

Now we can boot up the system; the start order doesn’t matter, as the components are self-organizing.

cat  <<EOF >> /etc/rc.conf
sniffle_enable="YES"
snarl_enable="YES"
howl_enable="YES"
EOF

service sniffle start
service snarl start
service howl start

Before we go on it’s a good moment to see if all the services are running:

# ps -aux | grep beam
howl    3135 571.1  1.3 1795680 108780  -  Ss   09:34    0:02.73 /usr/local/lib/howl/erts-8.3.5.1/bin/beam.smp -P 256000 -A 64 -W w -- -root /usr/local/lib/howl -progname howl -- -home /data/howl -- -boot /usr/local/lib/
snarl   3037 403.3  1.1 1803380  90676  -  Ss   09:33    0:03.15 /usr/local/lib/snarl/erts-8.3.5.1/bin/beam.smp -P 256000 -A 64 -W w -- -root /usr/local/lib/snarl -progname snarl -- -home /data/snarl -- -boot /usr/local/
sniffle 2487   4.1  1.3 1882760 112460  -  Is   09:33    0:04.03 /usr/local/lib/sniffle/erts-8.3.5.1/bin/beam.smp -P 256000 -A 64 -W w -- -root /usr/local/lib/sniffle -progname sniffle -- -home /data/sniffle -- -boot /us
root    1074   0.1  0.8 1734176  63308  -  Is   09:28    0:02.56 /usr/local/lib/chunter/erts-8.3.5.1/bin/beam.smp -P 256000 -A 64 -W w -- -root /usr/local/lib/chunter -progname chunter -- -home / -- -boot /usr/local/lib/
root     974   0.0  0.5 1691208  39304  -  Is   09:25    0:00.77 /usr/local/lib/fifo_zlogin/erts-8.3.5.1/bin/beam.smp -A30 -- -root /usr/local/lib/fifo_zlogin -progname usr/local/lib/fifo_zlogin/bin/fifo_zlogin -- -home

Configuring FiFo

So with everything set up, we have two more configuration steps. First, we disable FiFo’s local dataset cache, which uses LeoFS, which isn’t set up yet. Second, we create an initial user and permission structure.

# disable leofs dataset cache
sniffle-admin config set storage.s3.host no_s3

# Initialize user
snarl-admin init default Project-FiFo Users admin <secret password>

# Add FreeBSD dataset repository
sniffle-admin datasets servers add https://bsd.project-fifo.net/images

Now we can go to our UI and take a look; it should be at http://<your ip>.xip.io:8080, where we are greeted by Cerberus’s login screen.

We need to configure a package. For that, navigate to Configure -> Packages; let’s name it ‘small’ and give it 100% CPU (this means 1 full core), a gigabyte of memory, and 5 gigabytes of disk.

 

Next, we configure an IP range. Jails get a static IP address; IP ranges can be understood as DHCP IP pools. We navigate to Configuration -> IP Ranges and use the following settings, which match the NAT’d network we set up earlier:

Name:       private
NIC Tag:    admin
VLAN:       0
Subnet IP:  192.168.1.0
Netmask:    255.255.255.0
Gateway:    192.168.1.1
First:      192.168.1.2
Last:       192.168.1.254

Note that we used ‘admin’ as the nic tag; this relates to the setting in vmadm and does not need to be changed at this point.

Now we create a network, which is an abstraction over IP ranges to allow for fragmented or multi-range logical networks. It is straightforward: it only has a name. Once created, we edit the network to assign the IP range we just created to it. Navigate to Configuration -> Network.

 

Last but not least, we have to tell FiFo what datasets are available. For this, go to the Datasets tab and import the FreeBSD 11.1 dataset.

 

The first jail

Now, with everything set, the only thing left to do is create our jail. Navigate to ‘Machines’ and fill out the form with the dataset, package, and info we just entered.

Once we click Create, we have to wait a moment until the dataset is imported and the jail created. We can watch the zone move through its states until it finally reaches running.

Now click the little three dots on the right and select Console. Your browser may complain about pop-ups; if it does, you have to allow them. This gives you a console to your newly created jail.

Filed Under: Project-FiFo

Project-FiFo 0.9.3 release: Hello FreeBSD!

October 9, 2017 By Heinz N. Gies

Since its inception, Project-FiFo has run exclusively on illumos-based operating systems: at first SmartOS, and later OmniOS and Oracle Solaris. Today we are happy to announce the immediate availability of release 0.9.3 and, with it, FreeBSD support.

FreeBSD and Illumos have always seemed to be good friends rather than competitors in the operating system landscape. Not only do you rarely hear something bad said about the other from the communities, but there is a healthy level of respect and cross-pollination going on. BSD Jails inspired Zones; Crossbow inspired VNet, there is now DTrace, kstat and ZFS on BSD, and bhyve on SmartOS.

It is refreshing to see such cooperation in an otherwise extremely competitive field. This fantastic attitude inspires us. We are no kernel engineers or distribution maintainers, but we know a bit about cloud management.

So in this spirit, we started making a clone of vmadm, a tool we learned to love on SmartOS, for FreeBSD. It is not done or perfect yet, but we are very proud of the progress and the excellent feedback and contributions to it we have already gotten.

However, that is not all: with vmadm on FreeBSD, FiFo, our cloud management system, now works in combined environments, letting users mix and match hypervisors and run FreeBSD jails alongside SmartOS zones. In a way, it brings two friends together.

As usual, we published the release notes (with over 100 related tickets!) as well as the update manual as part of our documentation.

We will release an all-in-one dataset later this week.

Filed Under: Project-FiFo

FreeBSD comes to FiFo 0.9.3 with vmadm

September 28, 2017 By Heinz N. Gies

FiFo 0.9.3 has been in the works for a while, and it comes with quite a few new features. With our last release, we started experimenting with FreeBSD support. Since then much work has gone into improving this. We also did something rather exciting with the mystery box! However, more on that in a later post.

The stable release of 0.9.3 will land within a few days with only packaging and documentation tasks left to do. Part of this means that we’ll have packages for all major components that work natively on BSD. There is no more need for a SmartOS box to run the components!

FreeBSD

When we introduced FreeBSD support last version, we marked it as an experimental feature. We needed to experiment to find out what works and what does not, to understand the way FreeBSD does things, what tools exist, and how those align with our workflow. Bottom line: we were not even sure BSD support had a future.

We are happy to announce that with 0.9.3 we are now sure BSD support is a thing, and it is here to stay. That said, it was good that we experimented in the last release, as we have since made some significant changes. When first looking at FreeBSD, we went ahead and used existing tooling, namely iocage, to manage jails. It turns out the tooling around jails is not on par with what exists on illumos, and especially SmartOS: the goodness of vmadm as a CLI for managing zones is simply unparalleled. So we did what every (in)sane person would do!

vmadm

With 0.9.3, we implemented a version of vmadm that works with FreeBSD jails while keeping the same CLI. Our clone is completely standalone: vmadm is a compiled binary, written in Rust, which makes it blazing fast. The design takes up lessons learned from both zoneadm and vmadm on illumos/SmartOS instead of trying to reinvent the wheel. Moreover, while we love giving the FreeBSD community a tool we learned to love on SmartOS, this also makes things a lot easier for us: FiFo can now use the same logic on SmartOS and FreeBSD, as the differences are abstracted away inside vmadm. That said, there are a few notable differences.
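For readers who have not used vmadm on SmartOS, the workflow it preserves looks roughly like this. This is an illustrative sketch only: the manifest keys and values below follow the SmartOS vmadm style, but the alias, IP addresses, and UUID are made up, and the exact set of fields the FreeBSD clone accepts may differ.

```shell
# Describe the jail in a JSON manifest (field names follow the SmartOS
# vmadm conventions; the FreeBSD clone may support a different set).
cat > jail.json <<'EOF'
{
  "brand": "jail",
  "alias": "web01",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 512,
  "quota": 10,
  "nics": [
    { "ip": "10.0.0.12", "netmask": "255.255.255.0", "gateway": "10.0.0.1" }
  ]
}
EOF

# Create the jail from the manifest, list the machines on this host,
# and eventually tear the jail down again by its UUID.
vmadm create -f jail.json
vmadm list
vmadm delete <uuid>
```

The point is not the individual flags but that the same manifest-driven verbs work on both platforms, which is what lets FiFo drive SmartOS and FreeBSD hosts through one code path.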

First of all, vmadm uses datasets the same way it does on SmartOS. However, there is no separate imgadm tool; instead, the commands are encapsulated under vmadm images. To make this work, we also provide a dataset server with base images for FreeBSD that uses the same API as the SmartOS dataset servers. Second, we needed to work around some limitations in VNET to make jails fully usable in multi-tenant environments.
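In practice, that means image management moves under a vmadm subcommand. The verbs below are assumptions modeled on imgadm's on SmartOS, not confirmed syntax of the FreeBSD clone:

```shell
vmadm images avail          # browse datasets offered by the dataset server
vmadm images import <uuid>  # fetch a base image (e.g. a FreeBSD base jail)
vmadm images list           # show datasets already imported on this host
```

Because the dataset server speaks the same API as the SmartOS ones, pointing a host at a different image source is a configuration change rather than new tooling.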

nested jails

nested vnet jails on freebsd

While on illumos a virtual NIC can be bound to an IP address that cannot be changed from inside the zone, VNET does not support this. Preventing tenants from messing with IP settings is crucial from a security standpoint!

To work around this, each jail created by vmadm is actually two jails: a minimal outer jail with nothing but a VNET interface, no IP address or anything else, and an inner jail that runs the user's code. The outer jail creates the inner jail with an inherited NIC that gets a fixed IP, combining the security of a VNET jail with the security of a fixed-IP interface.

The nested jail layout resembles the way SmartOS handles KVM machines, running KVM inside a zone. So in addition to working around VNET limitations, this already paves the way for bhyve nested in jails, which might come in a future release. We hope to leverage the same two-step approach with just a different executable started in the outer jail instead of the jail command itself.
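In jail.conf terms, the outer/inner split could be sketched roughly as follows. This is a hand-written conceptual example, not configuration generated by vmadm; the jail name and interface are invented, though `vnet`, `vnet.interface`, `children.max`, and `persist` are standard jail(8) parameters:

```shell
# Outer jail: owns the VNET interface and the fixed IP, runs no user code.
outer {
    vnet;                          # give the jail its own network stack
    vnet.interface = "epair0b";    # hand the virtual NIC over to the jail
    children.max = 1;              # permit exactly one nested (inner) jail
    persist;                       # keep the jail alive with no processes
}

# The inner jail is then started from inside "outer" with an inherited
# network, so the tenant's code sees the fixed IP configured on epair0b
# but has no way to change it.
```

The same structure is what makes the future bhyve variant plausible: the outer jail stays identical, and only the process started inside it changes.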

Filed Under: Project-FiFo

Copyright © 2022 Project-FiFo