[NBLUG/talk] FHS and webapps
Kendall Shaw
kshaw at kendallshaw.com
Wed Nov 20 13:37:55 PST 2013
I think I will use /usr/local/app/name for now.
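
Something along these lines (just a sketch of how I might lay it out):

  /usr/local/app/name/bin/     scripts and executables
  /usr/local/app/name/share/   templates, static files
  /usr/local/app/name/etc/     the app's own configuration
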
On 11/20/2013 12:11 PM, Kendall Shaw wrote:
> See below:
>
> On 11/20/2013 09:42 AM, Kyle Rankin wrote:
>> On Sun, Nov 17, 2013 at 07:31:18PM -0800, Kendall Shaw wrote:
>>> The filesystem hierarchy standard seems to say that files for a
>>> website should be placed in /srv. Serving images for PXE from /srv
>>> seems to make sense to me.
>>> But... HTTP serves resources, which are not files any more than an
>>> executable is a file. And usually there isn't only one network
>>> program on a computer serving HTTP.
>> It's UNIX, everything is a file! That said, older installs used to put
>> PXE files under /tftpboot, but these days Debian-based distributions at
>> least tend to favor /var/lib/tftpboot. More below...
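>>
>> For instance, with dnsmasq acting as the TFTP server, something like
>> this in dnsmasq.conf (just a sketch, adjust to taste) serves boot
>> images out of that directory:
>>
>>   enable-tftp
>>   tftp-root=/var/lib/tftpboot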
>>
>>> So, of course, you can put webapps wherever you want. But, for the
>>> sake of knowing where a distribution or a vendor should not put
>>> files, where do you think a locally developed webapp's component
>>> files should be located?
>>>
>>> I think it would not be these places:
>>>
>>> The dump:
>>> /usr/local
>>>
>>> Distro files:
>>> /usr
>>>
>>> Vendor files:
>>> /opt
>>>
>>> Files for file transfer protocols:
>>> /srv
>>>
>>> Transient files:
>>> /var
>>>
>>> Kendall
>>>
>>> --
>>> Dead peer detection detects dead peer.
>>>
>> First, I'm just happy to see people still thinking carefully about the
>> right place to put files; too many sysadmins and developers these days
>> have no idea what the FHS is or why the existing directories are there,
>> and end up throwing things in a home directory or, worse, creating
>> their own special root-level directory.
>>
>> What I try to do, in general, when making decisions like this,
>> specifically with Debian-based distributions like Debian and Ubuntu, is
>> to pick an existing common package that achieves something similar and
>> see what it does. Debian folks, particularly for widely-used packages,
>> tend to be fairly picky about where files get installed, so there's a
>> good chance they will pick a good location. But even more important, to
>> me, is consistency. Even though I'm personally not a fan of /opt, if my
>> organization standardized on it, that's what I'd use. I'd rather things
>> be wrong but consistent than inconsistently right.
>
> I don't want to put files where they will conflict with the distro's
> package management system though. There are at least these distinct
> classes of software package:
>
> Distribution package managed files, e.g. X11
> Packages managed by other package managers, e.g. CTAN, npm, pypi
> Packages that are unmanaged but come from a provider that might
> provide more than one package, where it is meaningful to have them
> grouped by provider, e.g. Franz
> Independent unmanaged packages, e.g. gradle
> Downloaded unstructured packages that expect to be dumped in a big
> pile.... xv maybe.
> Locally developed packages
> Transient files, e.g. mail spool
> Per user packages, categorized somewhat like above.
>
> Categorized by actor:
>
> distribution, e.g. Yggdrasil provided packages
> installation, e.g. department wide packages
> administrator
> user
>
> I think it would be useful to keep those separate in a way that is
> standard across unices.
>
> /opt always reminds me of Solaris packages. Maybe that is where it
> came from. Back then (SPARC era) there was a big difference between
> the structure of /opt and /usr/local, in that each directory under
> /opt was an individual package, whereas /usr/local contained files
> from different packages mixed together. No /usr/local/share, for
> example. So trying to figure out which files belonged to a program
> was confusing.
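>
> Roughly (package names made up):
>
>   /opt/acmepkg/bin, /opt/acmepkg/lib, ...   one subtree per package
>   /usr/local/bin, /usr/local/lib, ...       many packages mixed together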
>
> Since there is still old software around, it seems to me like there
> should be a place for software that is going to insist on being mixed
> together. FHS calling /opt the place for vendor and registered
> provider files fits a distinct need, I think.
>
>> More to your question, it really depends whether the web files you are
>> talking about are executables (like .php, Ruby, or Java files) or
>> static files like images, user uploads, etc. The beauty of modern
>> package management is that you don't have to dump everything into a
>> single directory. For instance, if you look at the Debian wordpress
>> install, all of the .php files are dumped under /usr/share/wordpress
>> because they are treated like executables, but the Debian convention is
>> to put web server docroots under /var/www. Based on that, if I were on
>> a Debian system I would put my custom executable web app files under
>> /usr/local/share/appname, but I might put the main docroot under
>> /var/www/, particularly if I'm going to accept user uploads, as you
>> don't want /usr to grow rapidly.
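>>
>> In practice that split might look something like this (paths are just
>> an example, not gospel):
>>
>>   /usr/local/share/appname/    # the app's .php/code, root-owned
>>   /var/www/appname/            # docroot: images, CSS, static pages
>>   /var/www/appname/uploads/    # user uploads, writable by www-data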
>>
>> Basically, pick a well-known package in your distribution that achieves
>> a similar goal and see where it puts files, and then mimic it with the
>> only change of putting anything in /usr under /usr/local instead.
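>>
>> e.g., assuming the wordpress package is installed:
>>
>>   dpkg -L wordpress | sort | less
>>
>> lists every path the package installs, which makes its conventions
>> easy to copy.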
>>
>
> I don't see a reason to distinguish between HTML files and PHP or
> binaries though. They are all programs. If HTML is not a program, what
> is Prolog? There is a reason to distinguish between named resources
> served by services and named resources that are conceptually files,
> like things transferred by a file transfer protocol.
>
> Having a separate place for transient files seems good, so having /var
> for that makes sense to me.
>
> Kendall
>
--
Sorry, you must accept the license.