copying to an NFS share makes the Ubuntu desktop unresponsive

OS: ubuntu 16.10 x86_64

the whole Unity desktop got stuck when copying large files to an NFS share over a 1 Gbit ethernet adapter

found the solution here:

B10. Sometimes my server gets slow or becomes unresponsive, then comes back to life. I’m using NFS over UDP, and I’ve noticed a lot of IP fragmentation on my network. Is there anything I can do?
A. UDP datagrams larger than the IP Maximum Transmission Unit (MTU) must be divided into pieces that are small enough to be transmitted. If, for example, your network’s MTU is 1524 bytes, the Linux IP layer must break a UDP datagram larger than 1524 bytes into separate packets, all of which must be smaller than the MTU. These separated packets are called fragments.

The Linux IP layer transmits each fragment as it is breaking up a UDP datagram, encoding enough information in each fragment so that the receiving end can reassemble the individual fragments into the original UDP datagram. If something happens that prevents a client from continuing to fragment a packet (e.g., the output socket buffer space in the IP layer is exceeded), the IP layer stops sending fragments. In this case, the receiving end has a set of fragments that is incomplete, and after a certain time window, it will drop the fragments if it does not receive enough to assemble a complete datagram. When this occurs, the UDP datagram is lost. Clients detect this loss when they have not received a reply from the server after a certain time interval, and recover by retransmitting the datagram.

Under heavy write loads, the Linux NFS client can generate many large UDP datagrams. This can quickly exhaust output socket buffer space on the client. If this occurs many times in a short time, the client sends the server a large number of fragments, but almost never gets a whole datagram’s worth of fragments to the server. This fills the server’s IP reassembly queue, causing it to become unreachable via UDP until it expels the useless fragments from the queue.

Note that the same thing can occur on servers that are under a heavy read load. If the server’s output socket buffers are too small, large reads will cause them to overflow during IP fragmentation. The client’s IP reassembly queue then fills with worthless fragments, and little UDP traffic can get to the client.

Here are some symptoms of this problem:

You use NFS over UDP with a large wsize (relative to the network’s MTU) and a write-intensive application workload, or with a large rsize and a read-intensive one.
You may see many fragmentation errors on your server or clients (netstat -s will tell the story).
Your server may periodically become very slow or unreachable.
Increasing the number of threads on your server has no effect on performance.
One or a small number of clients seem to make the server unusable.
The network path between your client and server may have a router or switch with small port buffers, or the path may contain links that run at different speeds (100Mb/s and GbE).
The fix is to make Linux’s IP fragmentation logic continue fragmenting a datagram even when output socket buffer space is over its limit. This fix appears in kernels newer than 2.4.20. You can work around this problem in one of several ways:

Use NFS over TCP. TCP does not use fragmentation, so it does not suffer from this problem. Using TCP may not be possible with older Linux NFS clients and servers that only support NFS over UDP.
If you can’t use NFS over TCP, upgrade your clients to 2.4.20 or later.
If you can’t upgrade your clients, increase the default size of your client’s socket buffers (see below). 2.4.20 and later kernels do this automatically for the NFS client’s socket buffers. See Section 5.3 of the NFS How-To for more information.
If your rsize or wsize is very large, reduce it. This will reduce the load on your client’s and server’s output socket buffers.
Reduce network congestion by ensuring your GbE links use full flow control, that your switch and router ports use adequate buffer sizes, and that all links are negotiating their fastest settings.
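To put rough numbers on the fragmentation problem described above: one large UDP write has to be split into MTU-sized fragments, and losing any single fragment costs the whole datagram. A quick sketch (values are illustrative; a 20-byte IPv4 header is assumed):

```shell
# fragments needed for one NFS-over-UDP write (illustrative values)
wsize=32768                                   # one NFS write request
mtu=1500                                      # typical Ethernet MTU
payload=$((mtu - 20))                         # data per fragment after IPv4 header
frags=$(( (wsize + payload - 1) / payload ))  # ceiling division
echo "$frags"                                 # prints 23
```

With wsize=32768 every single write becomes 23 fragments, and one lost fragment throws away all 23 and forces a retransmit of the whole datagram.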

the default mount option was proto=udp, so I added proto=tcp to the mount options of the NFS share in /etc/fstab:

host:/share /mountpoint nfs proto=tcp,user=USERNAME 0 0

this solved the problem for me completely, with no loss in transfer speed
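To confirm the option took effect after remounting, check the mount flags in /proc/mounts. The snippet runs the check against a sample line so it is reproducible; on a live system you would use the real output of grep /mountpoint /proc/mounts instead (the addr value here is a placeholder):

```shell
# check whether the NFS share is mounted over TCP; "sample" stands in for
# a real line from /proc/mounts
sample='host:/share /mountpoint nfs rw,relatime,proto=tcp,addr=192.0.2.10 0 0'
case "$sample" in
  *proto=tcp*) result="mounted over TCP" ;;
  *)           result="still on UDP" ;;
esac
echo "$result"
```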

ubuntu 16.04 and a Z270 mainboard with a Killer E2500 ethernet adapter

unfortunately the Killer E2500 ethernet adapter is only supported from Linux kernel 4.8 onwards

ubuntu 16.04 comes with 4.4 kernel

4.8 kernel is standard in ubuntu 16.10
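A quick way to check whether the running kernel already meets that 4.8 minimum (a sketch; the parsing assumes the usual major.minor version format of uname -r):

```shell
# compare the running kernel against the 4.8 minimum for the in-tree alx driver
req=408
cur=$(uname -r | awk -F. '{printf "%d%02d", $1, $2}')
if [ "$cur" -ge "$req" ]; then
  echo "kernel $(uname -r) is new enough"
else
  echo "kernel $(uname -r) is too old"
fi
```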

so I decided to upgrade

vi /etc/update-manager/release-upgrades

and change

Prompt=lts

to

Prompt=normal

then start the release upgrade with

do-release-upgrade
if you decide to compile the alx module for your current kernel, follow this tutorial from the killernetworking website

mediawiki 1.27 – no edit buttons after upgrade

Update from mediawiki 1.16.2 to 1.27

the edit bar was present, but without images
clicking the empty spaces still worked

inspect element -> the background-image CSS property was missing on the buttons

Common.css is not present in mediawiki 1.27

ended up with this working solution for all skins:

http://yourwiki/index.php/MediaWiki:Common.css -> edit

/* CSS placed here will be applied to all skins */

#mw-editbutton-bold {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_bold.png);
}

#mw-editbutton-italic {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_italic.png);
}

#mw-editbutton-link {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_link.png);
}

#mw-editbutton-extlink {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_extlink.png);
}

#mw-editbutton-headline {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_headline.png);
}

#mw-editbutton-image {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_image.png);
}

#mw-editbutton-media {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_media.png);
}

#mw-editbutton-nowiki {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_nowiki.png);
}

#mw-editbutton-signature {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_signature.png);
}

#mw-editbutton-hr {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_hr.png);
}

save, reload page

this CSS is loaded on every page view, which is of course not optimal
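As an aside, the ten near-identical rules don't have to be typed by hand; a small shell loop can generate the whole block (same paths as above):

```shell
# generate one CSS rule per toolbar button
css=""
for b in bold italic link extlink headline image media nowiki signature hr; do
  css="$css#mw-editbutton-$b {
    background-image: url(/resources/src/mediawiki.toolbar/images/en/button_$b.png);
}
"
done
printf '%s' "$css"
```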

let me know if there is a better solution for this problem

AIX process coredumps, useful commands

trigger a coredump, this obviously kills the process

kill -ABRT <PID>

change coredump directory

syscorepath -p /path

this change is persistent

enable full coredump

chdev -l sys0 -a fullcore=true 

the core file can be as big as the process in memory, see the RSS column in

ps waux

but not bigger than the coredump ulimit

to check:

ulimit -c

to change coredump ulimit permanently, change /etc/security/limits

       core = -1

for unlimited
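A quick sanity check that the limit actually applies to new sessions; what ulimit reports should line up with the core stanza above:

```shell
# show the core-file size limit of the current shell;
# "unlimited" corresponds to core = -1 in /etc/security/limits
lim=$(ulimit -c)
echo "core limit: $lim"
```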

brute force protection, fail2ban owncloud integration

the whole integration is very version dependent

release data:

  • OS: CentOS release 6.7
  • fail2ban: 0.9.3-1.el6.1 from the epel repository
  • owncloud
  • owncloud

fail2ban- from the official repo had issues with banning multiple IPs

I don’t recommend a direct in-place fail2ban upgrade; instead delete the official rpm, move the whole /etc/fail2ban folder to a backup location, and then install the new rpm
iptables-multiport did not work for me after an in-place upgrade



owncloud configuration



add the following lines to owncloud's config/config.php:

  'loglevel' => 1,
  'log_type' => 'syslog',
  'log_authfailip' => true,

default syslog file is /var/log/messages



fail2ban configuration

create the filter file /etc/fail2ban/filter.d/owncloud.conf with:

[Definition]
failregex = .*ownCloud.*Remote IP: \'<HOST>[')]
ignoreregex =

this is the most important config file: it defines the filter rule, and <HOST> (in brackets) captures the IP address used later on for the iptables ban
you need to re-check your rules after every owncloud upgrade and possibly adapt the regex



here are two sample lines:


Dec 15 07:54:20 host ownCloud[5920]: {core} Login failed: 'user' (Remote IP: '


Dec 14 21:56:25 host ownCloud[10739]: {core} Login failed: 'user' (Remote IP: '')

as you can see, this most likely was a bug in owncloud; the regex [')] at the end matches both ' and ) to work with both versions
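Before wiring the filter into fail2ban, the pattern can be sanity-checked with plain grep. The log line below is synthetic: 192.0.2.1 is a documentation address standing in for the redacted attacker IP, and an explicit IP pattern replaces fail2ban's <HOST> token:

```shell
# grep approximation of the failregex; [')] keeps both log formats matching
line="Dec 15 07:54:20 host ownCloud[5920]: {core} Login failed: 'user' (Remote IP: '192.0.2.1')"
matched=no
echo "$line" | grep -Eq "ownCloud.*Remote IP: '[0-9a-fA-F.:]+[')]" && matched=yes
echo "$matched"
```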





add the jail to /etc/fail2ban/jail.local:

[owncloud]
enabled = true
filter  = owncloud
maxretry = 5
bantime  = 7200
logpath = /var/log/messages
action   = iptables-multiport[name=owncloud, port="443,80", protocol=tcp]

so after 5 failed login attempts, ports 80 and 443 are blocked for the offending IP
use the action iptables-allports if you want to block all ports


some useful commands I came across:


unban an IP from fail2ban:

fail2ban-client set owncloud unbanip <IP>

this does not work in older versions of fail2ban


test the regex from a fail2ban filter file against a log file:

fail2ban-regex /var/log/messages /etc/fail2ban/filter.d/owncloud.conf




credits to this website, it’s for older versions of fail2ban and owncloud, but at least some things worked for me

Update issues fedora 21 to fedora 22 with fedup

fedup --network 22

you need to specify your proxy server (if you use one) in /etc/dnf/dnf.conf, since fedora 22 completely switched to dnf

this error message occurred:

Using metadata from Mon Nov 30 09:43:54 2015 (0:03:14 hours old)
Error: cannot install both mozjs17-17.0.0-12.fc22.x86_64 and mozjs17-17.0.0-12.fc21.x86_64


the only way to get rid of it was

rpm -e --nodeps mozjs17-17.0.0-12.fc21.x86_64

the updater installs mozjs17-17.0.0-12.fc22.x86_64 anyways


after the reboot the update went fine, but I was not able to use the mouse cursor or keyboard in gdm

the problem seems to come from the nouveau drivers, so I decided to install the proprietary nvidia driver from their website

you need to disable loading of the nouveau module:


vi /etc/modprobe.d/blacklist.conf

blacklist nouveau


the nvidia installer offers to create such a file itself to prevent the module from loading; I'm not sure why, but for me that didn't work (possibly because nouveau was still baked into the initramfs, which needs a rebuild and a reboot to pick up the blacklist). that's the file:

# cat /etc/modprobe.d/nvidia-installer-disable-nouveau.conf
# generated by nvidia-installer
blacklist nouveau
options nouveau modeset=0


then switch to runlevel 3 (e.g. systemctl isolate multi-user.target) and install the NVIDIA drivers




and follow the instructions

convert DVD with all subtitles and audio tracks to mkv on fedora 21

first install handbrake

for compilation from source see my tutorial

run ghb, /usr/local/bin is the default binary folder

click “Source”, select the DVD drive/VIDEO_TS or a local folder

for subtitles, select the “Subtitle” tab and press “Add All”, or “Add” and choose your subtitle
for audio tracks, select “Audio List” and press “Add All”, or “Add” and choose the audio tracks you want

then select between mkv and mp4, press “Start”

that’s it

default output folder is your home directory

if your dvd is copy protected you need to install libdvdcss, which can be downloaded from

install handbrake from source on fedora 21

found no repository for fedora 21, so I compiled it from source

download the source code from
after I figured out every dependency (it has a lot of them), I found a nice site that lists all of them

sudo yum install yasm zlib-devel bzip2-devel libogg-devel libtheora-devel \
libvorbis-devel libsamplerate-devel libxml2-devel fribidi-devel \
freetype-devel fontconfig-devel libass-devel dbus-glib-devel \
libgudev1-devel webkitgtk-devel libnotify-devel \
gstreamer-devel gstreamer-plugins-base-devel

for handbrake gui support

sudo yum groupinstall "Development Tools" "Development Libraries" \
"X Software Development" "GNOME Software Development"

tar -xvjf HandBrake-0.10.2.tar.bz2
cd HandBrake-0.10.2
./configure
cd build
make
sudo make install

it should install to /usr/local/bin by default

Command line client: HandBrakeCLI


Connectivity check for uri ‘’ failed with ‘Could not connect: Network is unreachable’.

Fedora 21 NetworkManager-

NetworkManager periodically checks for a working internet connection and whether it is behind a captive portal; behind a proxy this check does not work, so it needs to be disabled in order to get rid of the syslog error messages

from the man page of NetworkManager.conf

This section controls NetworkManager’s optional connectivity checking functionality. This allows NetworkManager to detect whether or not the system can actually access the internet or whether it
is behind a captive portal.

uri
The URI of a web page to periodically request when connectivity is being checked. This page should return the header “X-NetworkManager-Status” with a value of “online”. Alternatively, its body content should be set to “NetworkManager is online”. The body content check can be controlled by the response option. If this option is blank or missing, connectivity checking is disabled.

interval
Specified in seconds; controls how often connectivity is checked when a network connection exists. If set to 0 connectivity checking is disabled. If missing, the default is 300 seconds.

response
If set, controls what body content NetworkManager checks for when requesting the URI for connectivity checking. If missing, defaults to “NetworkManager is online”.

it does not look like it is possible to specify a proxy server for the check, so the interval needs to be set to 0
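Putting that together, connectivity checking can be switched off in NetworkManager.conf; a minimal sketch using the section and option names from the man page quoted above:

```ini
# /etc/NetworkManager/NetworkManager.conf
[connectivity]
# 0 disables the periodic connectivity check entirely
interval=0
```

afterwards restart the service: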

systemctl restart NetworkManager.service