
Samba share to Windows from CentOS falls over with 'permission denied' for some functions.

Last Modified: 2020-01-28
I am running Samba 4.9.1 on a CentOS 7 server.

I have a file share on a Windows 2008 R2 server (soon to be upgraded) that I want to write to from the CentOS server.

I have installed 'samba-client' and 'cifs-utils'. I have added a line to /etc/fstab to mount the share on a folder in the root called 'output' (i.e. '/output'), passing the credentials of a dedicated Windows user from a text file (username, password and domain).
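For reference, the fstab line and the credentials file are of roughly this shape (the server name, user, password and file paths below are placeholders, not my real values):

  //WINSERVER2/output  /output  cifs  credentials=/root/.smbcreds,uid=root,gid=root,file_mode=0664,dir_mode=0775  0  0

and in /root/.smbcreds:

  username=backupsvc
  password=NotTheRealPassword
  domain=MYDOMAIN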

On the Windows side I have granted the Windows user 'Full control' on the folder AND shared the folder with them.

This all works well and the two servers are now linked: if I create a text file from CentOS in the folder '/output' it appears in the Windows share. I can list the share's contents, create folders, delete files and delete folders.
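For example, simple manual tests like these all succeed (the file and folder names are just examples):

  touch /output/test.txt                     # create a file on the share
  mkdir /output/testdir                      # create a folder
  cp /etc/hostname /output/testdir/          # copy a file across
  ls -l /output                              # list the share contents
  rm -rf /output/testdir /output/test.txt    # delete again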

HOWEVER, when I run a shell script that runs a Docker program (third-party, which I can't upload) it returns 'Permission denied' when it tries to generate a database backup in that location.

The exact same setup worked under Ubuntu 16.04 so I am confused as to what is missing here.

I have even run # semanage fcontext -a -t samba_share_t '/output(/.*)?' followed by # restorecon -v /output to stop SELinux from blocking the Samba communication.
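For reference, the current SELinux state and the context on the mount point can be checked with something like:

  getenforce        # confirms whether SELinux is Enforcing or Permissive
  ls -dZ /output    # shows the SELinux context applied to the mount point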

Questions:
  1. What could I be missing?
  2. Do I need to create a Samba user just to access this share? I didn't under Ubuntu 16.04.
  3. How can I further manually test the share to see if I am missing permissions?

Author

Commented:
The script is creating a TGZ archive into the '/output' folder
CERTIFIED EXPERT

Commented:
This looks like an issue with the docker container not being able to access the required location. Either the docker run is missing the parameters needed to make the location available within the container, or some new cgroup isolations were added, possibly isolating the network stack.

It would probably be easier to debug if you can fire up a shell within the container to run diagnostics.
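Something along these lines, for example (the container name is just a placeholder):

  docker ps                                   # find the running container's name or id
  docker exec -it <container_name> /bin/sh    # open a shell inside it and poke around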

Author

Commented:
I am not that experienced in docker (don't use it for development - just run these programs) so am not sure what you are suggesting. These are third-party containers and scripts, so I have no idea what the container is doing or what environment settings it needs.

I can say that when I manually run the script that fires off the docker I get the same failure, yet when I manually try to create an 'output.tar.gz' archive in the '/output' share, that works perfectly.

I have a feeling it must be a misconfiguration of Samba / SELinux under CentOS, because the exact same config and docker scripts / program worked perfectly under Ubuntu (i.e. SELinux / CentOS adding a layer of complexity).
CERTIFIED EXPERT

Commented:
This will prove difficult to debug without either more info regarding the container or hands on the device.

One thing that might help is to run the script through strace, which will likely provide a more explicit error.
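For example (the output file and script name here are placeholders):

  strace -f -o /tmp/backup.strace ./your_backup_script.sh    # -f also follows child processes
  grep -E 'EACCES|EPERM|ENOENT' /tmp/backup.strace           # look for permission / missing-path errors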

It is also fairly possible the default docker isolation level is stricter. Namely, separate mounts or a separate network stack might get in the way.

Author

Commented:
But if the docker software on Ubuntu was the same and the Samba setup was the same, the only change is CentOS and SELinux - is it not?

If I run the docker script and give it a local (on CentOS server) folder there is no issue but if I direct it to the share then it falls over.

I think I need guidance on how Samba config is different under CentOS than Ubuntu (as far as I can see that is just SELinux).
CERTIFIED EXPERT

Commented:
Samba is the same, and not the issue if the share works on the host. Docker isolation is different across versions and OSes.

The issue you face is most likely one of those mentioned above.

Author

Commented:
"Samba is the same"
I have read a lot about how SELinux interacts with / interferes with Samba, so that statement surprises me.

What I think I really need is:
  1. a good clear guide on how to get Samba working with Windows
  2. a good clear guide on any SELinux adjustments needed to get Samba working
  3. Answers to questions like: if I am writing to a share do I need to create a Samba user (never needed in the past), and do I need to change 'smb.conf'?

I have attached the strace output in case you can extract anything from it ...

Thanks
strace-result---U.txt
CERTIFIED EXPERT

Commented:
The strace shows an issue regarding overwriting a backup dir's contents. Have a look at the last lines.

SELinux issues can be checked by disabling SELinux altogether and testing again.
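For a quick test, something like this (temporary until the next reboot, so revert afterwards):

  setenforce 0    # switch SELinux to permissive mode immediately
  getenforce      # should now report Permissive

For a permanent disable, set SELINUX=disabled in /etc/selinux/config and reboot.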

Personal comment: SELinux is mostly a useless and hard-to-debug mess. I personally hardly ever run it.

Author

Commented:
I will look at disabling SELinux and re-testing tomorrow.

I have uncovered the following interesting points:
  1. The share I am currently targeting from CentOS is on WinServer2
  2. Previously I had the same software and script on Ubuntu and that pointed to WinServer1
  3. I have just changed Ubuntu to work against WinServer2 and re-tested - it failed
  4. I changed Ubuntu back to work against WinServer1 and it worked

So that would suggest that there is some difference in the Windows shares.

I adjusted the CentOS box to point at WinServer1 and that failed. So it looks like there may be issues on both ends!
CERTIFIED EXPERT

Commented:
Look into the issues I mentioned. If the share works on the host OS, there is little chance that is the issue.

Author

Commented:
OK so this morning I disabled SELinux (and of course restarted) and re-tested. Unfortunately the docker Mongo backup still fails.

Looking at the only output, the 'Backup.log' file, the single entry reads:
Failed: error connecting to db server
I can't work out how my wrapper script, which creates text files and copies other files to the CentOS share, works fine (the copied and created files appear on the WinServer2 box), yet the docker command keeps failing.

As I say, exactly the same script was used from an Ubuntu box and that worked fine. There must be some additional layer / complication that CentOS adds. I had hoped that was SELinux, but apparently not.
CERTIFIED EXPERT

Commented:
As expected, SELinux is out of the picture.

... which leaves two things to look into.

What about the error in the strace? Does the dir actually exist, in the expected location?
write(1, "Directory /pawdemo-backups/eta a"..., 82Directory /pawdemo-backups/eta already exists, we will not overwrite its contents
) = 82

AFTER checking the above, and making sure the mount works locally, look into the container isolation levels on both setups. Post the results if this brings nothing obvious to you.

Author

Commented:
The 'eta' folder the strace is referring to must be a temporary folder that is created during the docker backup commands. That folder is not present in the output of a successful run and it is not in the backup script being executed.

Of course '/pawdemo-backups' exists

As I have said before I am not a Docker developer, I only know enough to use Docker containers, restart and monitor them. I have no idea how to 'check container isolation levels'.

The question is simply: why would the script work when pointed at '/pawdemo-backups' when NOT mounted, but fail when that path is mounted to a Samba share, even though all other read/write tests work?

I have avoided mentioning this so as not to confuse the situation, but I have written a shell script that, in addition to running the third-party docker script, also:
  • creates a folder in the mounted directory
  • copies additional files from the docker install to the mounted directory

Now all my file / directory commands work perfectly and even a failed run (i.e. of the third party script) will still produce the folders and file copies that I myself have scripted.
CERTIFIED EXPERT

Commented:
The answer is simple: docker runs in isolated containers. The isolation produces changes you are clearly not aware of: the user id may be different inside and outside of the container, the mount points may not be available, the network stack may be isolated. And that is obviously assuming you got the right path.

Start by checking the mount point actually exists INSIDE the docker container (mount isolation), then check it is indeed reachable INSIDE the container (network isolation), then check the user INSIDE the container is allowed to write to the mount point location.
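A rough sketch of those checks (the container name is a placeholder; /pawdemo-backups is the path from your strace):

  docker exec -it <container> sh -c 'mount | grep pawdemo'               # is the mount visible inside the container?
  docker exec -it <container> sh -c 'id; ls -ld /pawdemo-backups'        # which uid runs inside, and what does it see?
  docker exec -it <container> sh -c 'touch /pawdemo-backups/.writetest'  # can that uid actually write there?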

Given the strace results and the fact that the dir allegedly does not exist on a successful run (yet the script claims it already does), it seems likely the script is not actually writing to the location you think, maybe because the mount point is not present in the docker container.
CERTIFIED EXPERT

Commented:
If you can script, can't you simply generate that tarball locally and copy it to the NAS after it is generated?
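Something of this shape, for instance (the script name and paths are placeholders):

  ./run_third_party_backup.sh /var/backups/staging    # let the container write to a local folder first
  cp /var/backups/staging/*.tar.gz /output/           # then copy the finished tarball onto the CIFS mount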

Author

Commented:
That is the only option I have come up with so far (writing the script as I post)

But since I will need to replicate this backup script (the one I wrote to automate the third-party backup), which had been working for months and months without error under Ubuntu, I need to find out what the problem is.

Also, a 'generate locally and copy to share' script is by definition more complicated and has more chance of generating errors (e.g. if not all files are copied, or some files are not deleted, leading to duplicate or missing files on the share) - so I would prefer to get the previous simple and reliable script working.
CERTIFIED EXPERT

Commented:
I already told you what is likely the issue. I can't help more unless you actually look into it.

Author

Commented:

I went with the idea of mounting another drive and using a cron job to copy from the mounted folder to '/var/www/html'. This appears to work fine.

The solution was to mount the share to another Linux folder and copy at regular intervals. This has not identified the cause but resolved the issue.
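For completeness, the cron entry is of roughly this shape (the paths, schedule and copy direction below are illustrative placeholders rather than my exact setup):

  # /etc/cron.d/backup-sync - copy finished backups across every 15 minutes
  */15 * * * * root cp -r /backups/local/. /output/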

CERTIFIED EXPERT

Commented:
I did identify the cause: the mount point is not available from within your container. This issue is documented all over the internet and frequent with docker setups. You need --driver=local and a bind mount, or to pass NFS mount information, when running the docker container. And you need to make sure the user id inside the container has the required privileges.
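In practice that means something along these lines (the image, volume, share and credential names are placeholders):

  # bind-mount the host's existing mount point into the container
  docker run -v /output:/pawdemo-backups some/backup-image

  # or create a named volume backed by the CIFS share directly
  docker volume create --driver local \
      --opt type=cifs \
      --opt device=//winserver2/output \
      --opt o=addr=winserver2,username=backupsvc,password=secret,vers=3.0 \
      backupvol
  docker run -v backupvol:/pawdemo-backups some/backup-image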

Anyway, good to see you managed to work around it.