All posts by bernard

Ubuntu – update-alternatives error with package corrupted

In Ubuntu 16.04, you may get the following error during a package installation.

update-alternatives: error: /var/lib/dpkg/alternatives/rename corrupt: invalid status
dpkg: error processing package perl (--configure):
 subprocess installed post-installation script returned error exit status 2

Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)

In this case the corrupted alternative is “rename”, pulled in by the “perl” package, but the same fix should work for any package.


sudo mv /var/lib/dpkg/alternatives/rename /var/lib/dpkg/alternatives/rename.old
sudo apt-get install rename
sudo apt-get install perl
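The same pattern should generalize to any corrupted alternative. Below is a sketch of the generic steps, demonstrated against a scratch directory so it is safe to try; for the real repair, substitute /var/lib/dpkg/alternatives for the scratch directory and prefix the commands with sudo:

```shell
# Generic form of the fix, shown in a scratch directory.
# For the real repair use /var/lib/dpkg/alternatives and sudo.
ALTDIR=$(mktemp -d)        # stand-in for /var/lib/dpkg/alternatives
ALTNAME=rename             # name of the corrupted alternative
touch "$ALTDIR/$ALTNAME"   # stand-in for the corrupt status file

# Move the corrupt file aside; reinstalling the owning package
# (sudo apt-get install --reinstall $ALTNAME) regenerates it.
mv "$ALTDIR/$ALTNAME" "$ALTDIR/$ALTNAME.old"
ls "$ALTDIR"
```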

Running a .NET Core 2 MVC Application in an NGINX Location

NGINX Configuration

In the following paragraphs, we suppose the ASP.NET MVC application is accessible from the URL:

In the /etc/nginx/sites-available/ configuration file, set the following location.

location /mvcmovie/ {
   proxy_pass http://localhost:5000/;   # adjust to the address Kestrel listens on
   proxy_http_version 1.1;
   proxy_set_header Upgrade $http_upgrade;
   proxy_set_header Connection keep-alive;
   proxy_set_header Host $http_host;
   proxy_cache_bypass $http_upgrade;
}

The trailing slash characters at the end of both the location name and the “proxy_pass” URL are necessary.


In the Configure method of the Startup class (Startup.cs file), add app.UsePathBase with the name of the location set up in the NGINX configuration file. Do not add a slash character at the end.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UsePathBase("/mvcmovie");
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });
    if (env.IsDevelopment())
        app.UseDeveloperExceptionPage();
    app.UseMvc(routes =>
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}"));
}


Deploy .NET Core 2 MVC Application on Linux ARM with NGINX

The following procedure has been used on Linux Mint 18.3 (Ubuntu 16.04 x86) for the development and Ubuntu 16.04.3 armv7l (ODROID HC1 device) for the deployment.

To develop a .NET Core 2 application with Visual Studio Code on Linux x86, follow the Microsoft tutorial here.

Publish your application for Linux ARM:

dotnet publish -c Release -r linux-arm

Copy the content of the “./bin/Release/netcoreapp2.0/linux-arm/publish” folder to your production server (ODROID for example).

Install nginx

sudo apt-get install nginx


If optional nginx modules are required, building nginx from source might be needed instead.

The apt-get installation creates a System V init script that runs nginx as a daemon on system startup. Since nginx was installed for the first time, explicitly start it by running:

sudo service nginx start

Verify that a browser displays the default landing page for nginx.

Configure nginx

To configure nginx as a reverse proxy to forward requests to our ASP.NET Core app, modify /etc/nginx/sites-available/default. Open it in a text editor, and replace the contents with the following:

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }
}

This nginx configuration file forwards incoming public traffic from port 80 to port 5000.

Once the nginx configuration is established, run sudo nginx -t to verify the syntax of the configuration files. If the configuration file test is successful, force nginx to pick up the changes by running sudo nginx -s reload.

Monitoring the app

The server is set up to forward requests made to http://<serveraddress>:80 on to the ASP.NET Core app running on Kestrel at http://localhost:5000. However, nginx is not set up to manage the Kestrel process. systemd can be used to create a service file to start and monitor the underlying web app. systemd is an init system that provides many powerful features for starting, stopping, and managing processes.

Create the service file

Create the service definition file:

sudo nano /etc/systemd/system/kestrel-hellomvc.service

The following is an example service file for the app:

[Unit]
Description=Example .NET Web API App running on Ubuntu

[Service]
ExecStart=/usr/bin/dotnet /var/aspnetcore/hellomvc/hellomvc.dll
Restart=always
# Restart the service after 10 seconds if the dotnet process crashes
RestartSec=10
User=www-data

[Install]
WantedBy=multi-user.target


Note: If the user www-data is not used by the configuration, the user defined here must first be created and given proper ownership of the files.

Save the file and enable the service.

systemctl enable kestrel-hellomvc.service

Start the service and verify that it is running.

systemctl start kestrel-hellomvc.service
systemctl status kestrel-hellomvc.service

● kestrel-hellomvc.service - Example .NET Web API App running on Ubuntu
    Loaded: loaded (/etc/systemd/system/kestrel-hellomvc.service; enabled)
    Active: active (running) since Thu 2016-10-18 04:09:35 NZDT; 35s ago
Main PID: 9021 (dotnet)
    CGroup: /system.slice/kestrel-hellomvc.service
            └─9021 /usr/local/bin/dotnet /var/aspnetcore/hellomvc/hellomvc.dll

With the reverse proxy configured and Kestrel managed through systemd, the web app is fully configured and can be accessed from a browser on the local machine at http://localhost. It is also accessible from a remote machine, barring any firewall that might be blocking it. Inspecting the response headers, the Server header shows that the ASP.NET Core app is served by Kestrel.

HTTP/1.1 200 OK
Date: Tue, 11 Oct 2016 16:22:23 GMT
Server: Kestrel
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Transfer-Encoding: chunked

Viewing logs

Since the web app running on Kestrel is managed by systemd, all events and processes are logged to a centralized journal. However, this journal includes entries for all services and processes managed by systemd. To view items specific to kestrel-hellomvc.service, use the following command:

sudo journalctl -fu kestrel-hellomvc.service

For further filtering, time options such as --since today or --until "1 hour ago", or a combination of these, can reduce the number of entries returned.

sudo journalctl -fu kestrel-hellomvc.service --since "2016-10-18" --until "2016-10-18 04:00"


Install .NET Core 2 on Ubuntu 16.04 ARM

This post is an extract of the article “Installing Ubuntu 16.04 on a Raspberry Pi 3, installing .NET Core 2, and running a sample .NET Core 2 app” (here) written by Jeremy Lindsay.

He tested .NET Core 2 on a Raspberry Pi 3; I tested the framework on an ODROID-HC1.

.NET Core 2 Installation

# Update Ubuntu 16.04
sudo apt-get -y update

# Install the packages necessary for .NET Core
sudo apt-get -y install libunwind8 libunwind8-dev gettext libicu-dev liblttng-ust-dev libcurl4-openssl-dev libssl-dev uuid-dev

# Download the latest binaries for .NET Core 2

# Make a directory for .NET Core to live in
sudo mkdir /usr/local/lib/dotnet

# Unzip the binaries into the directory we just created
sudo tar -xvf dotnet-runtime-latest-linux-arm.tar.gz -C /usr/local/lib/dotnet

# Now add the path to the dotnet executable to the environment path
# This ensures the next time you log in, the dotnet exe is on your path
echo "PATH=\$PATH:/usr/local/lib/dotnet" >> dotnet.sh
sudo mv dotnet.sh /etc/profile.d

Then run the command below to add the path to the dotnet executable to the current session:

export PATH=$PATH:/usr/local/lib/dotnet


Test the .NET Core 2 installation

You can now test the framework.

dotnet --info

You should see the version and runtime environment information for the installed framework.


Ubuntu – Automatic Updates

Automatic updates

The unattended-upgrades package can be used to automatically install package updates. It can be configured to update all packages or only security updates. First install the package by entering the following in a terminal:

sudo apt install unattended-upgrades

To configure unattended-upgrades, open /etc/apt/apt.conf.d/50unattended-upgrades and edit the file to suit your needs:

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
//      "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};

Certain packages can be blacklisted so they are not updated automatically. To blacklist a package, add it to the list:

Unattended-Upgrade::Package-Blacklist {
//      "vim";
//      "libc6";
//      "libc6-dev";
//      "libc6-i686";
};

The double “//” marks a comment, so anything after “//” is ignored.

To enable automatic updates, edit /etc/apt/apt.conf.d/20auto-upgrades and set the appropriate apt configuration options:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

The above configuration updates the package list, downloads, and installs available upgrades every day. The local download archive is cleaned every week. On servers upgraded to newer versions of Ubuntu, depending on your responses, the file listed above may not be there. In this case, creating a new file of this name should also work.

You can find more information about the apt periodic configuration options in the header of the /etc/cron.daily/apt script.

The results of the automatic updates are logged to /var/log/unattended-upgrades.



Setting Unattended-Upgrade::Mail in /etc/apt/apt.conf.d/50unattended-upgrades will enable an email to be sent to an administrator detailing any packages that can be upgraded or that have problems.
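For example, a minimal mail setup might look like this (the address below is a placeholder; MailOnlyOnError is an optional related setting):

```
Unattended-Upgrade::Mail "admin@example.com";
// Only send mail when an upgrade fails or produces errors:
Unattended-Upgrade::MailOnlyOnError "true";
```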

Another very useful package is apticron. apticron can be used to configure a cron job that emails an administrator about any packages on the system that have updates available, along with a summary of changes in each package.

To install apticron, enter the following in a terminal:

sudo apt install apticron

Once the package is installed, edit /etc/apticron/apticron.conf to set the email address and other options:
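For instance, the relevant line in /etc/apticron/apticron.conf looks like the following (the address is a placeholder to replace with your own):

```
EMAIL="admin@example.com"
```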


Linux – Remote Synchronizing Files (Rsync)

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories, both remotely and locally, on Linux/Unix systems. With the rsync command you can copy and synchronize data across directories, disks, and networks, perform data backups, and mirror data between two Linux machines.

This article explains 10 basic and advanced uses of the rsync command to transfer files remotely and locally on Linux machines. You don’t need to be the root user to run the rsync command.

Some advantages and features of the rsync command
  1. It efficiently copies and syncs files to or from a remote system.
  2. It supports copying links, devices, owners, groups, and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol that transfers just the differences between two sets of files. The first time, it copies the whole content of a file or directory from source to destination, but on subsequent runs it copies only the changed blocks and bytes.
  4. Rsync consumes less bandwidth, as it compresses data while sending and decompresses it while receiving.
Basic syntax of rsync command
# rsync options source destination
Some common options used with rsync commands
  1. -v : verbose
  2. -r : copies data recursively (but doesn’t preserve timestamps and permissions while transferring data)
  3. -a : archive mode; copies files recursively and also preserves symbolic links, file permissions, user & group ownerships, and timestamps
  4. -z : compress file data
  5. -h : human-readable, output numbers in a human-readable format

Suggested Read: How to Sync Files/Directories Using Rsync with Non-standard SSH Port

Install rsync in your Linux machine

We can install the rsync package with the help of the following command.

# yum install rsync (On Red Hat based systems)
# apt-get install rsync (On Debian based systems)

1. Copy/Sync Files and Directory Locally

Copy/Sync a File on a Local Computer

The following command syncs a single file on a local machine from one location to another. In this example, a file named backup.tar is copied or synced to the /tmp/backups/ folder.

[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
created directory /tmp/backups
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

In the example above, you can see that if the destination does not already exist, rsync creates the directory automatically.

Copy/Sync a Directory on Local Computer

The following command transfers or syncs all files from one directory to a different directory on the same machine. In this example, /root/rpmpkgs contains some RPM package files and you want that directory copied inside the /tmp/backups/ folder.

[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
sending incremental file list
sent 4.99M bytes  received 92 bytes  3.33M bytes/sec
total size is 4.99M  speedup is 1.00

2. Copy/Sync Files and Directory to or From a Server

Copy a Directory from Local Server to a Remote Server

This command syncs a directory from a local machine to a remote machine. For example, if there is a folder “rpmpkgs” on your local computer containing some RPM packages, and you want to send that local directory’s content to a remote server, you can use the following command.

[root@tecmint]$ rsync -avz rpmpkgs/ root@
root@'s password:
sending incremental file list
sent 4993369 bytes  received 91 bytes  399476.80 bytes/sec
total size is 4991313  speedup is 1.00
Copy/Sync a Remote Directory to a Local Machine

This command helps you sync a remote directory to a local directory. In this example, the directory /home/tarunika/rpmpkgs on a remote server is copied to /tmp/myrpms on your local computer.

[root@tecmint]# rsync -avzh root@ /tmp/myrpms
root@'s password:
receiving incremental file list
created directory /tmp/myrpms
sent 91 bytes  received 4.99M bytes  322.16K bytes/sec
total size is 4.99M  speedup is 1.00

3. Rsync Over SSH

With rsync, we can use SSH (Secure Shell) for data transfer. Using the SSH protocol while transferring data ensures that it travels over a secured, encrypted connection, so nobody can read it while it crosses the wire on the internet.

Also, rsync needs the user/root password to accomplish its task; using the SSH option sends your login credentials in an encrypted manner, so your password stays safe.

Copy a File from a Remote Server to a Local Server with SSH

To specify a protocol with rsync, use the “-e” option with the name of the protocol you want. In this example, we use “ssh” with the “-e” option to perform the data transfer.

[root@tecmint]# rsync -avzhe ssh root@ /tmp/
root@'s password:
receiving incremental file list
sent 30 bytes  received 8.12K bytes  1.48K bytes/sec
total size is 30.74K  speedup is 3.77
Copy a File from a Local Server to a Remote Server with SSH
[root@tecmint]# rsync -avzhe ssh backup.tar root@
root@'s password:
sending incremental file list
sent 14.71M bytes  received 31 bytes  1.28M bytes/sec
total size is 16.18M  speedup is 1.10

Suggested Read: Use Rsync to Sync New or Changed/Modified Files in Linux

4. Show Progress While Transferring Data with rsync

To show progress while transferring data from one machine to another, use the ‘–progress’ option. It displays the files and the time remaining to complete the transfer.

[root@tecmint]# rsync -avzhe ssh --progress /home/rpmpkgs root@
root@'s password:
sending incremental file list
created directory /root/rpmpkgs
1.02M 100%        2.72MB/s        0:00:00 (xfer#1, to-check=3/5)
99.04K 100%  241.19kB/s        0:00:00 (xfer#2, to-check=2/5)
1.79M 100%        1.56MB/s        0:00:01 (xfer#3, to-check=1/5)
2.09M 100%        1.47MB/s        0:00:01 (xfer#4, to-check=0/5)
sent 4.99M bytes  received 92 bytes  475.56K bytes/sec
total size is 4.99M  speedup is 1.00

5. Use of –include and –exclude Options

These two options allow us to include and exclude files by specifying patterns: they let you name the files or directories you want included in the sync, and exclude the files and folders you don’t want transferred.

In this example, the rsync command includes only those files and directories that start with ‘R’ and excludes all others.

[root@tecmint]# rsync -avze ssh --include 'R*' --exclude '*' root@ /root/rpm
root@'s password:
receiving incremental file list
created directory /root/rpm
sent 67 bytes  received 167289 bytes  7438.04 bytes/sec
total size is 434176  speedup is 2.59

6. Use of –delete Option

If a file or directory does not exist at the source but already exists at the destination, you might want to delete that existing file/directory at the target while syncing.

We can use the ‘–delete‘ option to delete files that are not present in the source directory.

Source and target are in sync. Now create a new file test.txt at the target.

[root@tecmint]# touch test.txt
[root@tecmint]# rsync -avz --delete root@ .
receiving file list ... done
deleting test.txt
sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file test.txt; when synchronized with the source using the ‘–delete‘ option, the file test.txt was removed.

7. Set the Max Size of Files to be Transferred

You can specify the maximum size of files to be transferred or synced with the “–max-size” option. In this example the maximum file size is 200k, so the command transfers only files that are 200k or smaller.

[root@tecmint]# rsync -avzhe ssh --max-size='200k' /var/lib/rpm/ root@
root@'s password:
sending incremental file list
created directory /root/tmprpm
sent 189.79K bytes  received 224 bytes  13.10K bytes/sec
total size is 38.08M  speedup is 200.43

8. Automatically Delete source Files after successful Transfer

Now, suppose you have a main web server and a data backup server; you create a daily backup and sync it to your backup server, and you don’t want to keep the local copy of the backup on your web server.

So, will you wait for the transfer to complete and then delete the local backup file manually? Of course not. This automatic deletion can be done using the ‘–remove-source-files‘ option.

[root@tecmint]# rsync --remove-source-files -zvh backup.tar /tmp/backups/
sent 14.71M bytes  received 31 bytes  4.20M bytes/sec
total size is 16.18M  speedup is 1.10
[root@tecmint]# ll backup.tar
ls: backup.tar: No such file or directory

9. Do a Dry Run with rsync

If you are new to rsync and don’t know exactly what your command is going to do, rsync could really mess things up in your destination folder, and undoing the damage can be a tedious job.

Suggested Read: How to Sync Two Apache Web Servers/Websites Using Rsync

This option makes no changes; it only does a dry run of the command and shows its output. If the output shows exactly what you want, you can remove the ‘–dry-run‘ option from your command and run it for real.

[root@tecmint]# rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/
sent 35 bytes  received 15 bytes  100.00 bytes/sec
total size is 16.18M  speedup is 323584.00 (DRY RUN)

10. Set Bandwidth Limit and Transfer File

You can set a bandwidth limit while transferring data from one machine to another with the help of the ‘–bwlimit‘ option. This option helps us limit I/O bandwidth.

[root@tecmint]# rsync --bwlimit=100 -avzhe ssh  /var/lib/rpm/  root@
root@'s password:
sending incremental file list
sent 324 bytes  received 12 bytes  61.09 bytes/sec
total size is 38.08M  speedup is 113347.05

Also, by default rsync syncs only changed blocks and bytes; if you explicitly want to sync the whole file, use the ‘-W‘ option.

[root@tecmint]# rsync -zvhW backup.tar /tmp/backups/backup.tar
sent 14.71M bytes  received 31 bytes  3.27M bytes/sec
total size is 16.18M  speedup is 1.10

That’s all with rsync for now; see the man page (man rsync) for more options. Stay connected with Tecmint for more exciting and interesting tutorials. Do leave your comments and suggestions.

Original article:

Linux Mint – Installing NFS Services

The NFS service is used to share folders between Linux hosts.


1. Install NFS server:
sudo apt-get install nfs-kernel-server

2. Edit /etc/exports to set up which directories you want shared on your network; mine looks like this:
sudo vim /etc/exports


Note: Change the network address part to match your network if needed.
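For reference, /etc/exports entries typically look like the following (the exported paths and the subnet are placeholders to adapt to your environment):

```
/srv/www,sync,no_subtree_check)
/srv/data,sync,no_subtree_check)
```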

3. Start the server.
sudo service nfs-kernel-server start


On the client side, the first step is to figure out where exactly you want the directories to appear. I decided to put them inside /mnt/servername (don't put them directly in /mnt; it will mess up any mounts that are already there).

1. Install the nfs client and automount utilities:
apt-get install nfs-common autofs

2. Edit /etc/auto.master to define the file for the NFS shares.
sudo vim /etc/auto.master

I commented out “+dir:/etc/auto.master.d” and “+auto.master” with “#” for this setup, as they just produced non-fatal errors anyway.

3. Add a line like the following at the bottom:
/mnt/servername /etc/auto.nfs --ghost

/mnt/servername is the “key”, where the directories will appear when mounted. The --ghost option creates the directories and makes them visible for easier use. /etc/auto.nfs is the location of the file we will create next; the name can be whatever you want.

4. Edit /etc/auto.nfs
sudo vim /etc/auto.nfs

Make it look something like this, modifying for your environment of course:


This creates two directories, /mnt/servername/www and /mnt/servername/data, and maps them to the exports defined on the server earlier. I used my server's IP address to avoid any DNS issues that might be lurking, but in reality you might want to use the server name instead.
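A minimal /etc/auto.nfs along those lines might look like this (the server address and export paths are assumptions to adapt to your environment):

```
www   -fstype=nfs,rw,soft,intr
data  -fstype=nfs,rw,soft,intr
```

Each line maps a key (the directory that appears under /mnt/servername) to an NFS export, with the mount options in the middle column.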

5. Restart the autofs service.

sudo service autofs restart

Original article:

Debug PHP and JavaScript in Visual Studio Code

Creating a task for launching the PHP server.

To create a task, see

tasks.json content file:

{
    "version": "0.1.0",
    "tasks": [{
        "taskName": "Php",
        "command": "/usr/bin/php",
        "args": [
            "-S", "localhost:8080",
            "-t", "${workspaceRoot}"
        ]
    }]
}

The command path depends on the OS. On Windows, it would be “C:\Program Files\PHP\php.exe”.

The port number (8080) on which the server listens can be any port you want, as long as it is not already used by another application.

The tasks.json file is located in the “.vscode” folder at the root of your project.

Enable XDebug for PHP
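XDebug for PHP on Ubuntu is typically enabled through an ini fragment loaded by PHP. As a sketch (the package name, file path, and XDebug 2.x setting names below are assumptions to check against your install), install the extension and point it at the debugger port used later in launch.json:

```ini
; e.g. /etc/php/7.0/mods-available/xdebug.ini
; (install the extension first, e.g. with the php-xdebug package)
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_port=9000
```

The port (9000) must match the “port” value of the PHP configurations in launch.json.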


After following the procedure described in the Visual Studio Code documentation, a launch.json file is generated.

launch.json content file:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9000
        }
    ]
}

The launch.json file is located in the “.vscode” folder at the root of your project.

Settings for Running in Chrome

Google Chrome has to be installed on the computer.

You can debug your client-side code using a browser debugger such as Debugger for Chrome.

Add these fields to the launch.json file.

        {
            "name": "Launch Frontend",
            "type": "chrome",
            "request": "launch",
            "url": "http://localhost:8080/index.php",
            "webRoot": "${workspaceRoot}",
            "preLaunchTask": "Php"
        }

The port number in the URL depends on the one defined previously in the Php task.
The “preLaunchTask” runs the PHP server before launching the “index.php” page in the Chrome web browser.

Final launch.json file:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9000
        },
        {
            "name": "Launch Frontend",
            "type": "chrome",
            "request": "launch",
            "url": "http://localhost:8080/index.php",
            "webRoot": "${workspaceRoot}",
            "preLaunchTask": "Php"
        }
    ]
}

On Windows, you can also debug your client-side code by using Debugger for Edge.

Debug on Client and Server Side

Set breakpoints in your PHP code and JavaScript code.

In the left bar, click on the Debug icon.

In the drop down list, select “Listen for XDebug” and click on the green arrow button.

In the same drop down list, select “Launch Frontend”.

Click on the green arrow button to start the PHP server.

Click a second time on the same button to launch your “index.php” page in Google Chrome.

Your debug session has started, and the process should stop on any breakpoints, PHP and JavaScript, that you set in your code.