5 Backup Tools Built-in to Linux

Want to see a more detailed video on this article? Click here!

There are plenty of backup tools available in Linux, but I prefer applications that are already built into the distro you are using.  Let’s take a look at 2 GUI tools you can use and 3 CLI tools.  This article is written mainly for Fedora, as that’s the distribution I use, but all of these solutions should be available on any distro as well as FreeBSD.  The CLI tools are also available on macOS. Let’s get started!

1. The Gnome Disk Utility

The first tool is a GUI tool that is a part of the Gnome desktop. If you don’t have Gnome installed, you can still install the “Disks” graphical tool:

sudo dnf install gnome-disk-utility

I recommend exporting your image file to an external drive. The drive will need to be bigger than the partition you are backing up, since the image will be the same size as that partition.

For more information, click here.

2. Dolphin, Nautilus, or Other File Management Tool

There are several file management tools available, whichever distro and desktop environment you are using in Linux. The 2 most popular in Linux are Dolphin and Nautilus. If you’ve ever used File Explorer in Windows or Finder in macOS, this should be easy for you. Simply select the file or folder you want to back up, and drag it to external media (usually a USB drive or thumb drive).

The nice thing about this method is that you can select specific files and/or folders. You don’t need to back up the entire partition, which saves time and disk space on the target device you are backing up to.

I suggest backing up at least your home directory. My user name is mark, so the path to my home directory is /home/mark. In the example image below, I’m backing up my Documents folder, located at /home/mark/Documents.
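If you prefer the terminal, the same file-level backup can be done with a recursive copy. Here is a minimal sketch using scratch directories so it can be tried safely; in real use the source would be ~/Documents and the destination your external drive’s mount point (the path /mnt/usb mentioned in the comment is an assumption, not something from this article):

```shell
# Demo with scratch directories; in real use SRC would be ~/Documents and
# DEST your external drive's mount point (e.g. /mnt/usb -- an assumption).
SRC=$(mktemp -d)/Documents
DEST=$(mktemp -d)
mkdir -p "$SRC"
echo "important notes" > "$SRC/notes.txt"

# -a (archive) copies recursively while preserving permissions, timestamps,
# and symlinks -- the same result as dragging the folder in a file manager.
cp -a "$SRC" "$DEST/"
ls "$DEST/Documents"    # notes.txt
```

The -a flag is what makes cp a reasonable backup tool here; a plain cp -r would lose timestamps and permission bits.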

3. The dd CLI Tool

Similar to the Disks GUI tool, dd makes a backup image of a partition or disk; the difference is that it’s a command line tool. If you don’t know which disk or partition you want to back up, you can use fdisk first:

sudo fdisk -l

The drive we are going to back up is /dev/sdf. It helps to know the device’s size so it’s easier to identify in the fdisk output. You can also use the Disks utility in Gnome or the Partition Manager in the KDE desktop. Once we know the drive to back up, we type in the dd command:

sudo dd if=/dev/sdf of=/mnt/4tb/2gb_sd.img

We can also back up just a partition on a particular drive or device. For example, if we want to back up partition 1 on sdf, we’d use this command:

sudo dd if=/dev/sdf1 of=/mnt/4tb/2gb_sd_part1.img

Note that we used sdf1, just as it appeared in the fdisk output earlier. The image was named 2gb_sd_part1.img so we know exactly what the backup contains.
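A few extra dd options make large copies faster, easier to follow, and safer. This sketch exercises the same flags on scratch files so it can be run without touching real hardware; with a real device you would use if=/dev/sdf1 and an image path on your backup drive (and sudo):

```shell
# Stand-ins for the real source partition and target image; with real
# hardware this would be if=/dev/sdf1 of=/mnt/4tb/2gb_sd_part1.img (as root).
src=$(mktemp)
img=$(mktemp)
dd if=/dev/urandom of="$src" bs=1K count=64 status=none

# bs= uses larger blocks for speed, status=progress prints a running byte
# count, and conv=fsync flushes the image to disk before dd exits.
dd if="$src" of="$img" bs=4K status=progress conv=fsync

# Confirm the image is byte-for-byte identical to the source:
cmp "$src" "$img" && echo "image verified"
```

The cmp step is optional but cheap insurance: a backup image you never verified is a backup you only hope you have.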

We can also back up from disk to disk and avoid creating an image file. Note that the target drive must be equal to or larger than the source drive/partition:

sudo dd if=/dev/sda of=/dev/sdc conv=noerror,sync

Here we are cloning the /dev/sda source onto the /dev/sdc drive. Make absolutely sure you’ve selected the right target device, or you will erase a drive you need!

For more information, click here.

4. The tar command

An acronym for “Tape Archive,” tar was originally used to back up files, directories, or disks to a tape drive, which was assigned a device such as /dev/st0. In this case, we’re going to use tar to back up the Documents directory in the home folder of user mark:

tar -zcvpf /mnt/4tb/Backups/backup-mark-docs.tar.gz ~/Documents

The backup file “backup-mark-docs.tar.gz” will be written to the /Backups directory on my 4 TB drive. You may be wondering what the switches do:

z : Compress the archive with gzip to reduce its size.

c : Create a new backup archive.

v : Verbosely list the files being processed.

p : Preserve the permissions of the archived files for later restoration.

f : Use the archive file or device given as the next argument.

If you are restoring your documents directory back to the user’s documents directory, the command would look something like this:

sudo tar -xvpzf /path/to/backup.tar.gz -C /restore/location --numeric-owner

Note the -C in the command; it tells tar which directory to extract into, ensuring the files are restored to the right place. Because tar strips the leading / when creating an archive, the paths inside our backup are stored as home/mark/Documents/…, so extracting with -C / puts everything back in its original location. Our actual command to restore the documents for user mark would look like this:

sudo tar -xvpzf /mnt/4tb/Backups/backup-mark-docs.tar.gz -C / --numeric-owner

Using tar to restore files or directories is best suited to a complete restore, in the event all user data was lost.
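Before restoring, it’s worth listing an archive’s contents so you know exactly what paths it will write. This sketch builds a tiny archive in a scratch directory to show the full round trip; the file names are placeholders, not paths from the article:

```shell
# Build a small archive the same way as above, using scratch paths.
work=$(mktemp -d)
mkdir -p "$work/Documents"
echo "report draft" > "$work/Documents/report.txt"
tar -zcpf "$work/backup-docs.tar.gz" -C "$work" Documents

# -t lists the archive without extracting, so you can see the stored paths:
tar -ztf "$work/backup-docs.tar.gz"

# Restore into a different directory with -C:
restore=$(mktemp -d)
tar -xpzf "$work/backup-docs.tar.gz" -C "$restore"
cat "$restore/Documents/report.txt"    # report draft
```

The -t listing is also a quick integrity check: if the archive is corrupt, tar will complain here instead of during an emergency restore.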

For more cool ways to use tar, check out this link.

5. The rsync command

Rsync is one of the easiest and quickest backup tools to use. It can back up from a local drive to another local drive, a networked server, or even a remote server. Perhaps the best feature of rsync is that after the first run it performs incremental backups: only new or changed files get transferred, greatly speeding up the process!

Let’s use rsync to back up the Documents directory again:

rsync -avzh --progress ~/Documents /mnt/4tb/Backups/docs-rsync

Rsync will automatically create the target directory if it doesn’t already exist; in the above example, the directory docs-rsync will be created on the 4 TB drive. Now let’s see what happens the second time we run rsync, after updating a single file.

In my test, after updating one file, the second run reported a speedup of 89,807 times over the first. You can expect the first run to take a while, but thereafter it will be much faster.

I recommend using a larger USB thumb drive or an external USB drive so you can remove the backup if necessary and store it somewhere else. Restoring the data is a breeze: you can use a file manager tool like Dolphin or Nautilus, or “restore” a single document or file by simply copying it back from the backup location to the source location.

rsync can be automated in many different ways, such as by using a script to run it, or even having cron run the script for you on a regular basis. Using cron for automated backups is beyond the scope of this article, but you can click here for more information! We’ll be doing a more in-depth article on rsync and cron in the future.
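In the meantime, a single crontab entry is enough to get a nightly backup going. The schedule below is an assumption, and the paths simply reuse the examples from this article; adjust both to your own setup:

```shell
# Example crontab entry (add it via `crontab -e`); runs the backup
# every night at 2:00 AM. Schedule and paths are assumptions.
0 2 * * * rsync -avzh /home/mark/Documents /mnt/4tb/Backups/docs-rsync
```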

I hope this article was useful, and gives you a few ideas on how you can do a quick backup and save yourself the headache of losing data. Using Linux and doing backups is not nearly as daunting as it seems, once you get used to it. Don’t be afraid to give it a try!

Don’t Buy a 2018 Macbook Until You Read This

Buy the wrong model of 2018 Macbook Pro, and you’ll be wasting your money.

See the video version of this blog post here!

Apple has done a surprise Macbook Pro refresh, axing the 2015 model at the same time.  You won’t see any changes from the outside, as the chassis and Retina display stay exactly the same as before, but the new lineup sees a large performance boost.

Given the choice, what would you buy? Many have complained about the Macbook Pro redesign released in 2016, even going so far as to call it just a Macbook, minus the “Pro.” Looking at the beautiful but limited chassis, it’s easy to see why: no card reader slot, no USB Type-A ports, and no more full-size HDMI port.

The problems didn’t end there for the Macbook redesign, as the butterfly keyboard problems gained enough attention to trigger a class action suit. If you were thinking the problem would be fixed with the 2018 model, think again.  Although there is a Butterfly version 3 keyboard coming out, it will only protect against dust getting inside the keyboard, not the stickiness that so many have reported. Apple is also working on making Butterfly version 3 quieter. Personally, I love the tactile feel and audible click I hear when I type; I find it very satisfying. Unfortunately, older model Macbooks and Macbook Pros will not be getting the newer version 3 keyboard if you send one in under the keyboard replacement program. In the case of users who sent in their 2016 models, the problem simply resurfaced within months.

On the plus side, the 2018 Macbook Pro ships with the newest Intel processors…at least on some models.  If you decide to buy the 13-inch base model, you’ll be stuck with a 7th-generation dual-core processor. The $1,799 version of the 13-inch model comes with the latest 8th-gen processor with 4 cores and 8 threads, and a turbo boost of 3.8 GHz, compared to 3.6 GHz on the 7th-gen processors. I doubt you’ll really notice the turbo boost difference, but you will notice the additional cores and threads.

The new 2018 model now includes a 32 GB RAM option, which is great to hear and will certainly make the Macbook Pro a contender for high-end work such as video editing. Keep in mind that the 32 GB option is only available on the 15-inch model, along with the 6-core, 12-thread processor.  If you do want 32 GB of RAM, you’ll have to add $400 to the purchase price, for a total of $2,799. A PC doesn’t sound so bad after all at that price. The RAM is also DDR4, running at 2400 MHz, which will further boost the speed of the 2018 models.

Apple still failed to update the Retina display to full 4K, something PC makers have been doing for years. It’s likely Apple would have had to reduce the claimed 10 hours of battery life, which they seem loath to do.  Personally, I’d rather have a higher-DPI display than longer battery life.

Perhaps most surprising is the fact that Apple has been refreshing the Macbook Pro annually since 2016, even if that is slower than the six-month refresh cycle of similar PC laptops.  It appears as if Apple has heard the criticism about stale options in its lineup, at least with the Macbook and Macbook Pro. Let’s hope the Mac soon follows with annual hardware refreshes as well.

Goodbye Macbook Pro 2015, we’ll miss you.

On a sad note, the availability of the 2015 Retina Macbook Pro has come to an end with the release of the powerful 2018 version.  Dearly missed will be the highly functional chiclet keyboard, the variety of ports, and the lack of a need for a dongle for every single piece of hardware you want to connect.  Think of it this way: if Apple added a card reader, a full-size HDMI port, and one USB-A port to the 2018 model, we would have a nearly perfect Macbook Pro. Oh, and a MagSafe power connector. And a chiclet keyboard.

The 2018 model does support Thunderbolt 3 on all of its USB-C ports, which is a great advantage for hardware that can use it. The 2015 model only has Thunderbolt 2, but it does have two such ports, without the need for a dongle. Add the fact that the 2015 model is now three full processor generations behind, and there’s little reason to pick one up.  Geekbench 4 scores on the 2015 13” 2.7 GHz model came in at 3,544 single-core and 6,743 multi-core, versus leaked scores for a 13” 2018 model of 4,448 single-core and 16,607 multi-core.

There really is no point in even entertaining the idea of buying a 2015 Macbook Pro, unless you can find one used for a great price and really only need a glorified internet machine. Cost is still a big factor when considering any Macbook, however. The 2018 Macbook Pro with the new Coffee Lake 4-core processor, 16 GB of RAM, and 512 GB of storage will run you $2,199, while a similarly equipped PC laptop with an i7 processor starts at about $1,600, a savings of $600.

Is the 2018 refresh enough to run out and buy a new machine if you have a 2016 or 2017 Macbook Pro?  It is if you are a heavy user and need the processing power and added memory. I was very surprised how much faster the quad-core Yoga 920 was in comparison to the older dual-core Yoga 910. The Coffee Lake quad-core processors have made nearly a 200% performance improvement in some cases, such as video rendering.

Many viewers have told me Apple customers don’t care about the savings and are locked hard into the Apple ecosystem. Although that is true to some extent, it’s interesting to see the comments I receive from former Mac users who have jumped ship to a PC-based laptop, myself included.  But that’s a story for another time.

Things I Do After Installing Fedora 28

Updated 2018-08-10

Although Fedora is a wonderful Linux distro, easily ready to perform most tasks right after installation, there are a few tweaks and additions I like to make after setup to make it even better. The changes in this tutorial are what suit me best; don’t feel this is all you can do with Fedora.

  1. Update Fedora with the latest updates, and (optionally) install the KDE desktop:
 sudo dnf update -y
sudo dnf group install kde-desktop-environment
  2. Give your system a permanent name.  The example below will set your hostname to “spock.”
 hostnamectl set-hostname spock
  3. After updating and installing KDE, add the RPM Fusion free and non-free repositories to get additional software and support:
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
  4. Install some useful software packages.  I typically use the CLI to do my installs, but you can just as easily use the GUI.  The packages I typically install are:
    • Guvcview – A great program for adjusting the settings of your webcam, such as my Logitech C920.
    • Chromium browser – The open source version of the Chrome browser and a great alternative to Firefox. Also helpful if you have problems with Firefox.
    • Rhythmbox – My favorite music player.
    • Kdenlive – The best open source non-linear video editor, in my opinion. It also has the most features.
    • Gimp – The best image editor for Linux. It can be too advanced for some, but it is certainly feature rich.
    • tlp – A very good power saving utility while on battery.  Highly recommended.
    • VLC – My preferred video player for Linux.
    • VirtualBox – A great program for creating virtual machines.  I like it because it’s cross-platform and easy to move VMs from one OS to another.
    • Audacity – A wonderful audio recorder and advanced editor.
    • OBS-Studio – A must-have tool for any YouTuber or video producer.
    • Handbrake – This tool is great for converting video formats when necessary.
    • ntfs-3g – This package provides Windows NTFS partition support.
    • fuse-exfat – Supports exFAT, a popular format for USB thumb drives.
sudo dnf install -y guvcview chromium rhythmbox kdenlive gimp tlp vlc VirtualBox audacity obs-studio handbrake ntfs-3g fuse-exfat
  5. Now it’s time to start and enable tlp:
sudo tlp start; sudo systemctl enable tlp
  6. If you play games on Steam, you can easily install Steam using DNF:
sudo dnf install steam
  7. Although RPM Fusion is a great place to get software, Flathub offers some programs and versions that are newer than those in the traditional Fedora and RPM Fusion repositories.  Flatpak support is built in.  To get access, simply install the repo file from Flathub.
  8. Open the file and install. To see the packages available from Flathub sooner, run this command, then restart the Software app:
gnome-software --quit
  9. I used Flathub to install the latest version of Kdenlive to bypass a bug in the older version available in the RPM Fusion repository.
  10. I install several gstreamer codecs to play various video files, including .mov from Apple products.  If you edit using Kdenlive, you’ll need to install these to support various video file types and containers:
sudo dnf install gstreamer1-plugins-bad-free gstreamer1-plugins-bad-freeworld gstreamer1-plugins-bad-nonfree gstreamer1-plugins-base gstreamer1-plugins-good gstreamer1-plugins-good-gtk gstreamer1-plugins-ugly gstreamer1-plugins-ugly-free
  11. One gstreamer plugin (openH264) is still needed, but can’t be downloaded from the repositories.  To fix this, simply open the “Software” app, do a search for GStreamer, then select GStreamer Multimedia Codecs – H.264 and install it.
  12. Make GUI changes as desired. Some of the setting changes I make in the KDE desktop are setting the mouse to double-click (the default is single-click), changing the minimize, maximize, and close widgets, changing the default screen lockout time, changing the power save settings, and changing the desktop image.
  13. Watch for added steps here!

VPNFilter Malware: What You Need To Know To Protect Your Home Network

These days most homes and small businesses use a SOHO-style router to provide internet connectivity and sharing among multiple devices, such as computers, smartphones, tablets, and gaming systems. Although there has been malware in the past that can infect your router (a type of IoT, or “Internet of Things,” device), VPNFilter is particularly dangerous because it is capable of restarting even after a reboot of your router.

VPNFilter is quite capable of sticking around and can be very stubborn, if not nigh impossible, for the average internet user to remove. What can you do to protect yourself, your home network, and your personally identifiable information (PII)?  We’ll go through 3 simple steps you can take now that will help secure your personal network from VPNFilter as well as any future malware that may surface.

To begin with, let’s talk about how VPNFilter works.  Created by the Russian hacking group “Fancy Bear,” VPNFilter was designed to operate in 3 stages.  The job of stage 1 is to establish a foothold in your router by infecting it with a persistent program that remains even after a reboot. Stage 1 then attempts to contact one of several fake accounts on photobucket.com or its backup site, toknowall.com.  In the event that neither of these sources is available, stage 1 goes into “listen mode,” quietly waiting for a direct connection from the hacker.

Stage 2 is non-persistent and cannot survive a reboot; we should consider ourselves lucky, as future malware might not be so poorly designed.  Stage 2 gives the hacker complete control over your router, including file collection, monitoring, and even self-destruct capabilities.

Stage 3 adds even more capabilities through the use of router plugins.  These plugins can be used to coordinate attacks against internet infrastructure using multiple IoT devices.  For you personally, there is a known packet sniffer plugin that collects and transmits data flowing through your device, such as website credentials.  Another plugin is a kill command that could render all the infected routers useless.

Fortunately, the FBI has been very proactive regarding VPNFilter malware, and has been able to get the bogus Photobucket accounts closed and has seized the toknowall.com domain which should keep your router from reaching stage 3 level of infection. There is always the possibility of direct communication from a hacker, so it’s best to shore up your router’s defenses.

What can you to do to protect yourself?  Let’s outline 3 simple steps.

  1. Reboot your router immediately, even if you think it’s not infected.
  2. Update the firmware on your router using the method provided in your documentation.

    TP-Link Router Firmware Upgrade
    The router upgrade screen for the author’s TP-Link RE580D, updated to the latest firmware version.
  3. Check and see if your router is on the list of infected devices. If it is, consider upgrading to a newer router in the future. Even if it is not on the list, complete steps 1-2 ASAP.

The list of affected routers according to Talos Intelligence is below; be sure to visit the link here for the most up-to-date list. As mentioned by Talos Intelligence, there are likely many more routers infected. Do not assume your router is safe. Complete steps 1 and 2 above to ensure you are as secure as you can be for now.

Linksys Devices:


Mikrotik RouterOS Versions for Cloud Core Routers:


Netgear Devices:


QNAP Devices:

TS439 Pro

Other QNAP NAS devices running QTS software

TP-Link Devices:





GPL vs. McHardy: How One Developer can Damage the Concept of Free Software and the GPL

Today we’re going to talk about an issue that has raised concern in the Linux community in general, and also among companies that use open source software (also known as FOSS) in their products. Many now fear that a recent legal battle between a company using Linux in its products and a single developer could greatly impair the adoption and development of Linux.

Linux developer Patrick McHardy brought China-based company Geniatech to court in Germany over an alleged violation of version 2 of the GNU General Public License, known as GPLv2. Geniatech uses Linux in some of its satellite TV receivers, sold in Europe by Geniatech Europe. Allegedly, Geniatech provided only a binary of its modified version of Linux; the GPLv2 license requires that any modifications of the Linux source code be made available to the public.

McHardy’s suit had some validity, because he contributed some of the source code in Geniatech’s products. Specifically, he was a part of the netfilter and iptables core team, which develops a firewall component built into the version of Linux used by Geniatech2. The software used by Geniatech was, of course, protected by the GPLv2 license.

The GNU General Public License version 2 is often referred to as a “copyleft” license, a play on the word “copyright.” The general concept is that software released under GPLv2 gives anyone the right to freely distribute copies and modified versions, provided the source code of any modification is made available. The idea is to allow Linux, or any GPLed software, to grow and become better through contributions made by many disparate developers.

As you can imagine, a legal issue arises when a company uses GPLed source code and modifies it, but does not offer its modifications and improvements to the general developer community. Typically, when a company is found to be in violation of the GPL, the violation is reported to the Free Software Foundation (FSF) or the Software Freedom Conservancy (SFC)1. The idea here is to use these organizations to gently move a company or individual in violation of the GPL toward a non-litigious resolution.

But why would the FSF and SFC not immediately use the courts to address GPL violations? To find the answer, we have to look at the reason Linux and other GPLed software is available in the first place. The idea is to provide quality software at a price anyone can afford. It would stand to reason that developers and users of GPLed software wish to see it grow and become more useful. In order for GNU Software and Linux to continue to expand, more developers and more users are a necessity.

Much of the time, it is companies that provide further development and innovation for GPLed software. The fear is that companies will abandon GPL-licensed software on the grounds that any one developer can sue for monetary damages if the GPL license is violated. But that’s a good thing, one might think: if a company is fearful of being sued, the GPL license is doing its job. But is it? If companies choose not to use GPL software, who will? Would GNU and Linux be left to enthusiasts and shrink to a user base even smaller than it already is?

That is why the FSF and SFC were surprised by McHardy’s actions. Initially, McHardy was interested in working with the FSF and SFC to address the alleged GPL infringement by Geniatech. Eventually, he stopped answering his phone or responding to emails from the FSF, the SFC, and the Netfilter core team2. The Netfilter team then received credible information that McHardy was using GPL copyright to sue several companies for compensatory damages relating to his source code contributions to Netfilter. As a result, he was suspended from the Netfilter team until he addressed the allegations against him4.

In July of 2017, McHardy made a “test purchase” of a product sold by Geniatech and found modified Linux code shipped in binary form on the device. Geniatech had not offered the source code to the public as required by the GPL license; as a result, McHardy filed for an injunction against Geniatech Europe on the grounds that its use of the software was in violation3.

The Higher Regional Court of Cologne (OLG Köln) made it clear that it understood the concepts of GNU, Linux, and the GPL license, as well as the claim of co-authorship of Linux alleged by McHardy3. McHardy had overstepped his bounds as a member of the netfilter development team, a small but important component of the overall GNU/Linux operating system. By alleging he was a co-author of Linux, McHardy attempted to put himself in the position of a major rights holder of the entire GNU/Linux OS used by Geniatech.

The court dismissed this claim, as well as the concept of co-authorship. “The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. In so far, he is only an editing author…and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only on those portions that he authored/edited, and not on the entire Linux kernel.3” The court goes on to say: “The plaintiff being a member of the netfilter core team or even the head of the core team still doesn’t support the claim of being a co-author, as netfilter substantially existed since 1999, three years before Patrick’s first contribution to netfilter, and five years before joining the core team in 2004.3

Additionally, the court maintained that just being a member of a core team (or a maintainer) does not immediately grant one a copyright over source code. According to the court, McHardy also did not “substantiate what copyrightable contributions he has made outside of Netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were2,3.” In other words, there was no verifiable proof that McHardy had any legal claim to the greater Linux code beyond his contributions to Netfilter.

Geniatech, in its own defense, presented substantial evidence that Mr. McHardy was attempting to profit from the court proceedings, as evidenced by 38 similar cases he had filed against companies in the past. Geniatech also showed that in one court case, McHardy requested a 2 million EUR penalty. This, of course, is contrary to the desires of the FSF and SFC, which hope to bring a company into compliance without the use of the courts, at least initially.

Given this evidence, the court then recommended “that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure that was applied currently.3” Faced with significantly more expensive and time-consuming litigation, McHardy opted instead to withdraw his injunction. He will still have to pay all court costs, including those of the defendant, Geniatech.

The FSF, SFC, core teams, and developers alike were relieved when McHardy decided to withdraw his injunction against Geniatech. Although Geniatech was in the wrong regarding its use of GPL software, it was willing to put up a protracted battle that McHardy did not have the stomach for. I for one agree with the outcome. If McHardy had been successful in pushing his claim through the courts, many companies would immediately rethink their strategy of using GPLed source code. As it is, just the possibility of costly litigation by a rogue developer has soured the advantage that free and open source software has. Any prudent company could and should reassess the risk of using GPLed source code in its products. If a company like Geniatech were to choose Windows Embedded at a greater initial expense, at least it would know that rogue programmers from Microsoft won’t be capable of suing individually.

We’ll leave with a quote from Linux developer Greg Kroah-Hartman: “The community is not out for financial gain when it comes to license issues – though we do care about the company coming into compliance.  All we want is the modifications to our code to be released back to the public, and for the developers who created that code to become part of our community so that we can continue to create the best software that works well for everyone.5

What are your thoughts on this topic? Does the attempt to sue Geniatech concern you? Drop me a comment and let me know what you think.


  1. The Principles of Community-Oriented GPL Enforcement. https://www.fsf.org/licensing/enforcement-principles
  2. Linux beats legal threat from one of its own developers. http://www.zdnet.com/article/linux-beats-internal-legal-threat/
  3. Report from the Geniatech vs. McHardy GPL violation court hearing. http://laforge.gnumonks.org/blog/20180307-mchardy-gpl/
  4. Suspending Patrick McHardy as a coreteam member. https://marc.info/?l=netfilter-devel&m=146887464512702#1
  5. Linux Kernel Community Enforcement Statement FAQ. http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement-faq/
  6. The Importance of Following Community-Oriented Principles in GPL Enforcement. https://sfconservancy.org/blog/2016/jul/19/patrick-mchardy-gpl-enforcement/



Notes on Installing Fedora 27 on the Lenovo Yoga 920

I try to stay as close to a stock Fedora experience as possible, with a few minor changes, so I won’t have much setup to do if I get a new system or experience a catastrophic failure. I choose my tools and environment carefully, and as a result a complete install as listed below usually takes me under an hour.  A video of this install guide is available here: https://www.youtube.com/watch?v=73mfFSUtXJg

  1. Shrink the Windows NTFS partition while in Windows. I have a 512 GB SSD; I shrunk the NTFS partition down to 200 GB to give me 300 GB for my Fedora setup.
  2. Create the bootable USB with the Fedora install. I use the Fedora USB installer tool in Fedora to create the drive. You can also download the USB Media Install tool by clicking the “Workstation” download link at fedoraproject.org. The tool will step you through the creation of bootable Fedora USB media.
  3. Turn off Secure Boot in the firmware settings, as it may interfere with the installation of Fedora.
  4. Follow the steps to install Fedora as usual. The installer hasn’t really changed in the last five or so versions. Installing Fedora Linux: https://youtu.be/eTYOrIFABhU?t=4m22s
  5. Once the install is completed, you will need to create a blacklist file in /etc/modprobe.d with blacklist ideapad_laptop in order to get the WiFi card working. For a quick fix that doesn’t require a reboot, issue this command: sudo modprobe -r ideapad_laptop  This has been a problem with Yoga laptops for a while. The problem will be fixed with step 6.
  6. sudo dnf update -y Don’t skip this step or step 7. Fedora 27 is not stable from the stock ISO image until you apply the updates.
  7. Reboot.
  8. hostnamectl set-hostname fedoraiscool Use this command to set your hostname to whatever you want. The command as used would set your host to fedoraiscool.
  9. dnf group install kde-desktop-environment My preference is KDE Plasma for the desktop. You can also just download the KDE spin of Fedora, but this is how I choose to do it. At times I will use the default Gnome 3.x desktop.
  10. Enable rpmfusion: sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
  11. Install some software packages: sudo dnf install -y guvcview chromium rhythmbox kdenlive gimp tlp vlc VirtualBox audacity obs-studio handbrake  There are many more software packages to choose from, these are the ones I typically use in my daily workflow that are not already installed by default.
  12. Start the tlp battery saver service: sudo tlp start
  13. Enable tlp at startup: sudo systemctl enable tlp
  14. Disable Bitlocker so you can mount your Windows partition in Linux. Please note that my Yoga 920 was shipped with Bitlocker enabled on the C: drive. You may have to use Disk Management in Windows to turn off Bitlocker which can take a while.
  15. Mount your Windows partition. Make a mount point for it: sudo mkdir /mnt/windows Then, sudo mount /dev/nvme0n1p3 /mnt/windows  The device name may vary; use sudo fdisk -l to determine which partition you need to mount, or use lsblk, or sudo fdisk -l | more to page the output.
  16. Make GUI changes as desired. Some of the setting changes I make in the KDE desktop are to set the mouse to double-click (the default is single-click), change the minimize, maximize, and close widgets, change the default screen lockout time, change power-save settings, and change the desktop image.
  17. Optionally, install Steam if you would like access to your Linux-based Steam games while in Fedora. In the CLI, simply type sudo dnf install steam
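The post-install commands in steps 6 through 13 can be collected into one small script. This is just a sketch: the hostname and package list are the examples from the article, so trim them to taste. By default it only prints each command (DRY_RUN=1) so you can review everything before anything runs with sudo:

```shell
#!/bin/sh
# Preview (or run) the post-install steps from the article.
# DRY_RUN=1 (the default) only prints commands; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: sudo $*"
  else
    sudo "$@"
  fi
}

run dnf update -y
run hostnamectl set-hostname fedoraiscool
run dnf group install kde-desktop-environment
run dnf install -y guvcview chromium rhythmbox kdenlive gimp tlp vlc
run tlp start
run systemctl enable tlp
```

Run it once as-is to see the full command list, then rerun with DRY_RUN=0 when you are happy with it.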


Setting up a Samba Server in Linux for Your Home Network

In this tutorial, we’ll set up a Samba share you can use to access files stored on a local system from anywhere on the local network. We’ll also take it a step further by creating another share that is read only.

  1.  Using terminal, run: sudo dnf install -y samba samba-client
  2.  If you want the Samba server to start automatically at boot, run sudo systemctl enable smb nmb
  3. Add an entry to the firewall to allow access from other systems on the network: sudo firewall-cmd --add-service=samba --permanent
  4. Reload the firewall service: sudo firewall-cmd --reload
  5. Allow home directory access for local users in SELinux: sudo setsebool -P samba_enable_home_dirs on
  6. Create access for a user account already on your Linux system: sudo pdbedit -a <user> where ‘user’ is the name of the local Linux user account you want to allow access to. Type in the password when prompted.
  7. Next restart the smb and nmb services to load changes made to Samba: sudo systemctl restart smb nmb
  8. Test access from another computer, e.g. from Windows: launch Explorer and type \\<ip address> in the address bar, where ‘ip address’ is the IP of your Linux Samba server.
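For reference, here are the server-side commands from steps 1 through 7 collected in order. This snippet only prints them (nothing executes with privileges), and “mark” is a placeholder for whatever local user you want to enable:

```shell
#!/bin/sh
# Print the Samba setup commands from the steps above, in order.
# Review the output and paste what you need; "mark" is a placeholder user.
for cmd in \
  "dnf install -y samba samba-client" \
  "systemctl enable smb nmb" \
  "firewall-cmd --add-service=samba --permanent" \
  "firewall-cmd --reload" \
  "setsebool -P samba_enable_home_dirs on" \
  "pdbedit -a mark" \
  "systemctl restart smb nmb"
do
  echo "sudo $cmd"
done
```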

Adding a new share to Samba Server:

  • Edit smb.conf: sudo vi /etc/samba/smb.conf  (You can also use the Nano editor if it’s installed)
  • At the very bottom, insert a new section:

[newshare]
comment = New share on Fedora Server
path = /path/to/new/share/
read only = yes (no if you want to have write access)
guest only = no
guest ok = yes (no if you don’t want guests to have access)
share modes = yes

  • We need to set another SELinux permission. This is very broad and not as secure as it should be. If you want to get more granular, put SELinux in permissive mode, access your new share, then read the SELinux error message that comes up*.
  • run sudo setsebool -P samba_export_all_ro 1
  • restart Samba: sudo systemctl restart smb nmb
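Before restarting smb and nmb, it’s worth validating your edited config with testparm, which ships with Samba and exits non-zero on syntax errors. A small sketch with a wrapper that degrades gracefully on machines where Samba isn’t installed yet:

```shell
#!/bin/sh
# Check an smb.conf for syntax errors with testparm before restarting Samba.
# -s suppresses testparm's "press enter" prompt.
check_config() {
  if ! command -v testparm >/dev/null 2>&1; then
    echo "testparm not installed"
  elif testparm -s "$1" >/dev/null 2>&1; then
    echo "config OK"
  else
    echo "config BAD"
  fi
}

check_config /etc/samba/smb.conf
```

If it reports a bad config, run testparm -s /etc/samba/smb.conf directly to see the specific parse error.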

*For specific information on how to set a more granular SELinux entry for your specific share, check out this video on how to setup a Samba share on your home network.


Welcome to my Universe!

276 tech videos in 2 years. Not too bad!

It’s been 2 years this April 2018 since I created the FastGadgets YouTube channel.  I really did it on a whim, not sure what it might become.  Of course, I always had hopes it would become something huge, a full time endeavor I could use as my primary income.  It hasn’t quite become that yet, but it’s getting closer.

Since that fateful day in April of 2016, I’ve created 276 tech related videos (as of 18th January, 2018).  That’s an average of 13 videos a month!  It may sound trivial, but the time, effort, and love I’ve put into each video has been a joy.  I would say the best part is the viewers, who’ve really made the journey so much fun.

I’d been planning on doing something with the fastgadgets.info domain I purchased years back, but never got around to it.  It’s been parked on my YouTube channel, so anyone who went to fastgadgets.info would simply be forwarded to YouTube.  The thing is, I began to realize all these videos and tutorials I do really deserve to have some kind of corresponding blog post.

Many of them are “how to” Linux videos, and I really have been meaning to create a text based repository of Linux knowledge that I’ve learned over the years.  Also, I realized I do so much more than just Linux.  I’ve got quite a few videos on Mac related content, Windows, networking and some security as well.

The plan from here on out is to develop content here for you, the aspiring IT person, as well as the everyday user who just needs a quick bit of information to get things going.  I hope you find this site useful, and I look forward to offering up my knowledge in the hopes it might help someone out, even if just a little bit.

Thanks for visiting!