Recommendation - Automated Linux Ubuntu Desktop Deployment Tool
A recommendation for an automated deployment tool, either commercial or open source (a recent, supported tool), to deploy Ubuntu Desktop 20.04 LTS/21 to 400 physical workstations; ideally one an expert has already used at that scale. The workstations may or may not share the same hardware build, use Nvidia GPUs, and carry an additional software stack.
It would also have to join the machines to Active Directory, create unique hostnames, and add OpenOffice.
Something akin or similar to Microsoft MDT for Windows, but for Linux deployments.
Any ideas ?
ASKER
Thanks David
This approach is no different to what we did in 1996! But we didn’t have Ansible.
Ansible would be okay for applying multiple changes across the estate, but the deployment piece is missing.
We are looking for something more end to end.
Currently discussing this with Ubuntu to see if they have anything!?
Did you check out Terraform? https://terraform.io/ (maybe too close to Ansible).
Or Canonical Maas? https://maas.io/
(I never used them, they came up when i tried to find something to run a private vDC, I settled on proxmox).
ASKER
@noci
Thanks for your reply.
We are familiar with MAAS and currently use it, but it's designed for servers, not desktops, and we are yet again having difficulties with the open-source pieces, curtinator and Launchpad, to get things moving; it seems to be a fudge!
Not investigated Terraform, we'll have a look now.
Edit: Terraform all looks Cloudy, and this is Physical deployment of Desktops.
To what extent are networked desktops different from servers... other than the installed software?
If the MAC address is known, it can be set up as a special case using a network boot,
OR all systems can be set to boot from the network; one can then boot into a network-provided "disk", recognize what needs to be done on the system, and act on it... (that might be a pivot to a locally installed root, or a kexec into a local kernel).
i am unsure this is what you want to hear, but i often perform that kind of stuff with rather simple do-it-yourself scripts.
you would essentially need to set up PXE boot that installs the os and runs an installation script.
i tend to set up installation scripts in cron so they download other scripts from a central server and run them. this is arguably much safer and easier to set up than pushing stuff. a simple web server, or even the same tftp server you set up for pxe, is more than enough.
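as a sketch of that pull scheme: a small script plus a cron entry. the server name, URL layout and state directory below are my assumptions, not anything specific from this setup.

```shell
#!/bin/sh
# Sketch of the "pull" scheme: each workstation periodically fetches a
# script from a central server and runs it. deploy.example.com and the
# paths are assumptions.
#
# Cron entry (e.g. /etc/cron.d/pull-config):
#   */15 * * * * root /usr/local/sbin/pull-config.sh

BASE_URL="${BASE_URL:-http://deploy.example.com/scripts}"
STATE_DIR="${STATE_DIR:-/var/lib/pull-config}"

pull_and_run() {
    script="$1"
    mkdir -p "$STATE_DIR"
    # download to a temp name first so a half-fetched script never runs
    if curl -fsS "$BASE_URL/$script" -o "$STATE_DIR/$script.new"; then
        mv "$STATE_DIR/$script.new" "$STATE_DIR/$script"
        sh "$STATE_DIR/$script"
    fi
}
```

a plain web server (or the same host that serves tftp for pxe) on the other end is enough.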
- hooking the machines to ad is not entirely trivial but not too complicated either.
- installing openoffice is just a single apt install oo command.
- generating unique host names can be performed rather easily based on mac addresses or possibly ids provided by the active directory, or simply pushed by dhcp which i assume you probably use.
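for the last point, a hedged sketch of deriving a hostname from the MAC address; the "lnx-" prefix and the eth0 interface name are my assumptions.

```shell
#!/bin/sh
# Sketch: derive a deterministic, unique hostname from the MAC address,
# so no database lookup is needed at install time. The "lnx-" prefix is
# an assumption; substitute your own naming convention.
mac_to_hostname() {
    # lower-case, strip colons, keep the NIC-specific last 6 hex digits
    printf 'lnx-%s\n' "$(printf '%s' "$1" | tr 'A-F' 'a-f' | tr -d ':' | tail -c 6)"
}

# In an install script one would typically run something like
# (eth0 is an assumption; use the real interface name):
#   hostnamectl set-hostname "$(mac_to_hostname "$(cat /sys/class/net/eth0/address)")"
mac_to_hostname "AA:BB:CC:DD:EE:FF"   # prints lnx-ddeeff
```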
ASKER
Is there anything that exists like MDT ?
Yes, PXE and scripts is what we did in 1996!
andrew, the least trivial part would be the pxe setup, which is quite poorly documented (unless things have changed recently).
you need to grab a preseed file and extract network settings. those need to be passed as kernel arguments.
beware: the disk layout is quite hard to figure out.
the rest is straightforward from the documentation.
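as a sketch, a pxelinux.cfg entry handing a preseed URL to the installer as kernel arguments; the paths, the 192.0.2.10 server address and the file names are assumptions.

```
# pxelinux.cfg/default (sketch; adjust paths and the preseed URL)
DEFAULT install
LABEL install
    KERNEL ubuntu-installer/amd64/linux
    APPEND initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical url=http://192.0.2.10/preseed.cfg netcfg/choose_interface=auto
```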
not that i know of, but probably yes. the issue is if you want to cover os installation: there is no way anything can push an os to a remote computer that has nothing installed previously.
ASKER
So nothing has changed, and there are no products, that have been developed since 1996 for Linux.
This needs to be something like MDT: Engineers visit a PC and deploy with ease.
At present, an Engineer can record the MAC address, enter it into a database, reboot the PC (they are set to boot from PXE over the network), and 45 minutes later they have a Windows PC joined to the domain, with the correct software stack, and patched, ready for a user to log in.
1 or 400 PCs can be deployed in 45 minutes, with little effort.
Linux Desktop - same system required.
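For comparison, the MAC-in-a-database part of that workflow maps fairly directly onto a dnsmasq configuration on the Linux side; all addresses, ranges and names below are assumptions.

```
# dnsmasq.conf sketch: DHCP + TFTP + PXE boot in one daemon
interface=eth0
dhcp-range=192.0.2.100,192.0.2.200,12h
dhcp-boot=pxelinux.0          # hand the PXE boot loader to clients
enable-tftp
tftp-root=/srv/tftp
# optionally pin a recorded MAC address to an address and hostname
dhcp-host=aa:bb:cc:dd:ee:ff,192.0.2.101,lnx-ddeeff
```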
if you want to deploy disk images, there are indeed many alternatives such as clonezilla.
it is very likely vendors such as redhat provide some tool that is similar to MDT. i do not believe they would be simpler to setup than the above, though.
ASKER
But Clonezilla is not automated; we might as well give Engineers USB sticks, but then we are reliant on their skill at typing off a cheat sheet to get things correct.
MDT does require some development to customize, but it does not take that long. (7 man-hours, and you'll have a PC rolled out.)
note that such setups, which just pull and make available the latest LTS system periodically, have been used by some of my clients for years and have survived multiple changes. at least one has been working since ubuntu 14.
no need to recreate a master whenever the os changes. that might be an argument to bother setting it up.
i do not get your point regarding clonezilla not being automated. it can be set up as a pxe server and can even distribute multiple images.
<<<
At present, an Engineer can record the MAC address, enter it into a database, reboot the PC (they are set to boot from PXE over the network), and 45 minutes later they have a Windows PC joined to the domain, with the correct software stack, and patched, ready for a user to log in.
1 or 400 PCs can be deployed in 45 minutes, with little effort.
>>>
so? with the above setup, an engineer records nothing in a database. just boot the system on the adequate network. installation typically takes about 5-10 minutes if there is a proxy or a fast internet connection.
with clonezilla, that would depend on how long it takes to transfer the master over your network. i believe minutes would suffice, plus another minute or 2 to dd the disk image.
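the dd step can be sketched like this, as it could run from a pxe-booted ramdisk; the image URL and target disk are assumptions, and gzip keeps the transfer short.

```shell
#!/bin/sh
# Sketch of the "pull an image and dd it" step. Image URL and target
# disk are assumptions.
deploy_image() {
    image_url="$1"   # e.g. http://deploy.example.com/golden.img.gz
    target="$2"      # e.g. /dev/sda
    # stream the image straight onto the disk; no temporary copy needed
    curl -fsS "$image_url" | gunzip -c | dd of="$target" bs=4M conv=fsync
}
```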
ASKER
So you are making the assumption that the Engineer knows Linux: knows how to change the hostname, add to the domain, etc.
The Engineer's skill set is Windows, not Linux; so boot the machine, and it gets deployed, ready for the end user to log in.
At present MDT is really lights-out deployment (with very little development).
i must state i am quite baffled by continuously seeing windows admins fight for weeks to set up their golden expensive tools, write GPOs, bother with batch and powershell scripts, dive into registry hacks, and yet pretend it is complicated to set up a couple of config files and write a 3-line shell script.
you can find a script that adds a linux machine to a domain online quite easily
you can change the hostname using the hostname command or setup dhcp to provide host names.
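for reference, the usual realmd/sssd route for the domain join; the domain and account names are placeholders, and it obviously needs a live AD domain, so it is shown as an ops sequence rather than a runnable script.

```
# Sketch: join an Ubuntu machine to Active Directory with realmd/sssd
apt-get install -y realmd sssd sssd-tools adcli
realm discover EXAMPLE.COM            # sanity-check DNS and ports first
realm join --user=join-svc EXAMPLE.COM
# create home directories for domain users on first login
pam-auth-update --enable mkhomedir
```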
ASKER
If we have to develop an equivalent of MDT for Linux, then that's what we will have to do, if there is NOT an off-the-shelf product, and clearly there isn't!
It's PXE and a bunch of scripts, just like 1996! I had hoped that things had matured since then (in the physical realm).
You need the kernel + initial ramdisk... (then no hardware beyond the network adapter, memory & CPU is involved).
From the initial RAM disk you can do ANYTHING; it is mostly used to discover a local disk, LVM, etc., set that up, possibly detect encrypted containers, and at last pivot over to the / of the intended system.
This CAN be automated if needed. In this respect there is no difference between a desktop, networked laptop, diskless system, or full-blown server.
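The pivot step described above looks roughly like the tail end of an initramfs /init; the device names are assumptions.

```
# Sketch: last steps of a network-boot initramfs /init, after the
# installer logic has prepared the local disk
mkdir -p /newroot
mount /dev/sda2 /newroot            # the freshly installed root fs
mount --move /proc /newroot/proc    # hand over the virtual filesystems
mount --move /sys  /newroot/sys
mount --move /dev  /newroot/dev
exec switch_root /newroot /sbin/init
```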
ASKER
Yes, we did all that in 1996!
I had hoped that things had matured since then. (in the physical realm)
i.e. it's not off the shelf.
For adding stuff to domains... check IPA (a linux-based AD equivalent), or a bridge to AD if needed so users have stable UIDs/GIDs. And SSSD for the client side.
all tools are fully script controllable.
or just configure your master adequately and clone the systems.
How hard is it to obtain another golden image from a tftp server and dd it to a local hard disk? a few KB of tools to add to the bootable image.
Are you looking for a solution or looking for a stick to hit a dog?
Unix systems were doing remote boots in the 1980s, especially with diskless systems as workstations.
ASKER
FOG came up in our research, except it does not look like it has been maintained since 2016/2020, with no reference since Ubuntu 16. It has been downloaded but not tested.
many such tools are not maintained because, given the simplicity of the task (dd an image to a disk), there is not much to maintain. it just works.
i am unsure what you want. you want to bother maintaining system images and pay for a free tool to deploy them?
ASKER
The risk: open source failing in the organisation, versus internal development, versus a commercial solution with ongoing support and development.
As per the example, 400 desktops is not a small business or your bedroom.
It seems FOG is being developed on GitHub, but seems to require funding!
As suspected, FOG fails to install! so that's more time discussing with FOG! (it does not support 21.1)
so what? we use that kind of thing for PRODUCTION SERVERS and have been doing so for years. there are probably over a thousand production machines running with my tooling at the moment. many of them are reinstalled on a regular basis, as in some places that is the standard upgrade process.
if you want someone to pin eventual failures on, pick something like symantec's or whatever overpriced ghosting solution. then you will have to maintain the system images which will be quite a pita.
and include ansible agent or something similar in your masters so you can setup an industry standard tool and bother doing in ansible what you can do with a much simpler shell script.
ASKER
Your organisation must have different requirements for the software stack it uses as far as security and governance go.
These are the constraints we have to work with.
then let us know what your requirements are
from what i gather
- you want to deploy using masters
- you want a paid tool to do the deployment
craft the masters, use a paid cloning server. there are tons of companies that provide them. some of them are in Gartner.
what are we missing ?
Unix/Linux is not known for fancy polished point & click management interfaces (although even those exist). There is no single solution because there is no single all-encompassing problem.
From various points of view, several tools started to do almost the same thing... puppet, ansible, chef... all tackle system config, and they each have a different approach.
Some things have worked since 1972: the cp command, the rm command, etc. init is another example: a secure design written around 1978 (SYS-V) that was/is still in use. Someone tried to replace it with a "better" way (systemd), more or less modeled after windows, and since then there have been several issues (including security issues allowing even RCE). Sometimes code is perfect as it is; the original init was about two sheets of A4, easy to audit.
In short, NEW & MAINTAINED != BETTER.
hey let us start a company.
- a small db to map mac addresses to image names and other options
- tftp with dnsmasq
- excel sheets to interact with the db (target companies are so fond of them no point in writing connectors and apis)
- builtin images that install the latest versions of major distribs unattended, and maybe esxi as well
- a fancy ui to wrap it all
- closed sources
- cpuburn on boot so it looks like it is busy
- extra storage boxes and whitepapers to run it hooked to a san with fiber connections
- expensive maintenance contracts
- builtin updates that update the ui with a new fancy button twice a year
and grab some innovation financing from whichever organism
why bother doing some real work if that's all it takes
ASKER
Recommended solution Official from Canonical
USB is still the best way to deploy Ubuntu to Desktops/ Laptops.
if you actually want to find a solution, we are here to help and experimented enough to do so. i mean this seriously.
if you are merely looking for a pretext to stick with windows and rant about 1996, there is no point in dragging this out.
from where i stand, you have been tasked by managers to deploy linux desktops, which means their mindset and requirements may not totally match yours regarding that matter.
USB boot does: load the kernel (possibly including an initramfs) and then start from the rootfs (the initramfs or, if absent, the usb rootfs).
which can be achieved using network booting as well.
The standard USB RootFS/Initramfs will have the code to setup & initialise the system.
- https://docs.oracle.com/cd/E19045-01/b200x.blade/817-5625-10/Linux_PXE_boot.html (more detailed doc).
- https://docs.oracle.com/cd/E56301_01/html/E56308/z40005af1026698.html (more recent doc)
(Oracle Linux is a RHEL derivative, like CentOS used to be.)
This is a generic description: https://linuxconfig.org/network-booting-with-linux-pxe
ASKER CERTIFIED SOLUTION
the fact that we do not know a commercial product does not mean it does not exist. redhat likely has one.
you seem unable to express your requirements clearly. i did provide at least one commercial solution that is actually maintained.
you want something in a different context that just looks like what you already know, and do not state your needs; much like complaining that you cannot see the flames on your brand new electric cooker.
that mindset is wrong. you should DELETE this question rather than accepting your own answer which states multiple likely false claims and does not even mention which solution from 1996 you use.
btw. sorry for the joking comment which was motivated by your ranting but indeed out of line.
may i dare remind you of your own question, btw
<<<
A recommendation for an automated deployment tool, either commercial or open source
>>>
had you mentioned commercial only, i would not have stepped in. i do not know one, nor am i interested in whichever may exist, and i do not play google monkey.
RHEL & derivatives have Kickstart as well.
Debian: https://www.debian.org/releases/etch/i386/ch04s07.html.en
(Ubuntu is based on debian)...
Ubuntu: https://ubuntu.com/server/docs/install/autoinstall
Citation:
"Differences from debian-installer preseeding": where they later state that for servers they now use cloud-init.
preseeds are the way to automate an installer based on debian-installer (aka d-i).
So imho debian preseeds should still work.
Then i found this one: https://theforeman.org/ which might actually be the answer to your question...
also note there is a thin line between a "server" and a "desktop"... the latter has a GUI by default and the former should not have one. Otherwise the systems use the same software packages for the same CPU architecture.
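A minimal autoinstall file for the Ubuntu link above might look like this sketch; the identity values, storage layout and package list are assumptions (and note the Ubuntu archive ships LibreOffice rather than OpenOffice).

```
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: unassigned
    username: deploy
    password: "$6$replace.with.a.real.crypted.hash"
  storage:
    layout:
      name: lvm
  packages:
    - openssh-server
  late-commands:
    - curtin in-target -- apt-get install -y libreoffice
```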
preseed does work for ubuntu. i have that running in production at multiple clients.
afaik, kickstart can also be used to deploy debian based systems and actually pretty much any linux including linux from scratch though i haven't used it in ages.
---
note that the linux world does change.
as an example, the above requirements require only a single master. xorg or wayland is perfectly capable of selecting the most adequate driver on startup without config, so just include all of them for a few MB of extra cost.
likewise, it is trivial to run a dialog-based script on startup that prompts for host names and the like. and even more trivial to use dhcp, or the mac address, the cpu id, or whatever other unique number or some random data to generate host names.
https://www.youtube.com/watch?v=hOh2bNQmOCI
How to: https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-automate-initial-server-setup-on-ubuntu-18-04