Speaking strictly for myself, I dislike the idea of making Gallery3 an Apache-only application — in the sense that it relies on an external system (Apache rewriting) to enforce internal permissions. This made sense a decade or two ago, when there wasn't any decent way of doing things differently. It also still makes sense for those still using Apache with mod_php as a monolithic system to run a whole array of PHP-based applications on a server with enough resources.
But Apache is not the 'best' tool for all situations. New generations of applications run their own internal, minimalistic web server and expose it conveniently through a port. While you can certainly use Apache as a front-end for those applications, it's really overkill when there are plenty of alternatives around to provide a modern, secure and efficient front-end web server that relays requests to an application running in the backend. Nginx is perhaps the oldest of those solutions, and possibly the most battle-tested, but it's not the only one. Indeed, Nginx, since its very beginning, has never supported PHP 'natively' — PHP has always had to be run as a backend service (through PHP-FPM or a similar solution). The advantage, of course, is that you can cleanly separate front-end HTTP-related concerns from backend considerations (such as pretty URLs and/or permissions...). Ideally, those things should be completely separate, not 'integrated' as if Gallery (or any other application) were a strange extension or addition to Apache, as opposed to being a standalone application.
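To make that separation concrete, here is a minimal sketch of what such a front-end/backend split typically looks like with nginx + PHP-FPM. The paths, socket location and the /albums/ prefix are my own assumptions for the sake of illustration — not Gallery3's actual layout:

```nginx
server {
    listen 80;
    server_name gallery.example.com;
    root /var/www/gallery3;

    # Static assets are served directly from the filesystem; nginx
    # (and any cache in front of it) handles these without touching PHP.
    location /albums/ {
        expires 30d;
        add_header Cache-Control "public";
        try_files $uri =404;
    }

    # Everything else goes to the framework's single entry point,
    # which does its own internal routing and permission checks.
    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

Note how nothing here knows (or cares) about the application's internals: the front-end only decides what is static and what gets handed to the backend.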
Gallery3 runs on top of the now defunct and obsolete Kohana Framework. There is arguably at least one valid successor fork (Koseven) which seems to still be maintained (the last update was just a week ago, which is a really good sign), and eventually Gallery3 might be moved from Kohana to Koseven, guaranteeing some degree of future-proofing. Still, I think one ought to leverage the framework to do whatever internal routing is necessary, and to put all the safeguards in place there, rather than relying on 'external sources' for that. Granted, I'm not really used to Kohana — nor to any popular PHP framework, to be honest — so I cannot claim to know how easy it is to work with that framework in particular. But for those who are serious about continuing to develop Gallery3, I would seriously encourage you to work strictly within the constraints of the framework. The whole concept of having a PHP application tinker directly with .htaccess files makes me cringe — that's how a company I worked for did things in 1998, and we lived to regret such decisions. It could be worse, of course — it could be happily writing Apache vhost configuration files instead, or, why not, directly accessing /etc/passwd to create and remove users (aye, that's one of the many ways we did things back in 1998, in a hurry...).
Similarly, while I understand the concept of storing images in quasi-hidden, obscure directories with complex access paths — and I can even agree that those might make sense in a scenario where users are not supposed to view certain images, such as in a system behind a paywall — it's also true that something will go wrong at some point with that approach. Files get lost; the database loses track of where certain files are supposed to be; things get out of sync, and so forth. You need to deploy maintenance tasks to figure out where the 'missing files' have gone and update the database accordingly. Similarly, the database must be pruned once in a while, in case it got corrupted, was restored after a database server error, and now contains pointers to images on disk that don't exist anymore — or someone made a mess of the image folders, uploaded things directly into them, and 'forgot' to tell the database about the changes...
Not to mention that you have to forfeit a lot of optimisations — the kind that didn't exist in 1997: CDNs, different ways of compressing images on disk, different levels of caching (on-premises, in the cloud...), direct static access to images bypassing the actual engine running the application, load balancing, and so forth. In a sense, many programmers are so deeply in love with the abstract, virtual environment provided by their favourite language or framework that they forget that contemporary operating systems have incredibly efficient filesystems — it's foolish to 'reinvent the wheel' in the hope of 'beating' the filesystem and developing something 'better'. That might have been possible a few decades ago, when disks weren't so fast and many filesystems were not as optimised as they are today. I'm actually quite amazed that, these days, a few programmers out there have figured out that storing a whole CMS on disk, rather than in a database, gives blindingly fast performance; modern CMS are supposed to be 'dynamic' rather than 'static', but that has little relevance to how things are actually stored. In other words: don't 'fight' the filesystem by trying to be more clever than it. The filesystem is your friend, your ally — not something that has to be crushed, or at least ignored. Leveraging the filesystem means that front-end layers such as nginx, Varnish, or even CloudFlare (to mention three different levels) can efficiently cache content all the way to the browser itself. Remember, a faster algorithm always beats a faster machine — throwing hardware at a problem in order to 'fix' it just means it was sloppily programmed. One thing that I've always loved about Gallery is that it is — and always has been! — very fast, even on old, underpowered systems.
I'm assuming that when Gallery was first developed, and Kohana became the framework of choice, the original developers didn't have racks and racks of servers to deploy their application on, so they just had to use whatever underpowered device they could grab to run a few tests.
Last but not least... I'm well aware of the current trend (started by Google!) of distributing everything as a Docker container (or, well, some other sort of popular container). The theory is that 'things are too difficult to configure anyway' so developers will carefully pick a working combination of software layers and just ship those as an opaque container. That's all good — and most certainly has lots of uses! — but it has two terrible disadvantages.
The first, of course, is performance. Docker containers and similar technologies are appropriate for cloud-based computing, which is virtual anyway, where 'performance' is just a question of moving a slider to get more CPU, RAM, disk space and instances — and getting charged accordingly. If everything is neatly inside a container, distributing software becomes a true blessing — no need to hunt down 'rogue' instances still running the 'wrong' software stack and throwing errors. But if you're paying for just a single server to host your own content, containers are just another virtualization layer on top of the 'bare metal' running beneath, sucking up precious resources. One may argue that, in practice, container technology is so advanced in 2021, and gets so much help from CPU instructions specialised for virtualization, that this overhead is negligible. This might be true in some scenarios, but, from my experience — and not from looking at someone else's benchmarks! — it is hardly the case. I run Docker once in a while on my Mac, when I really need to launch some application that is not 'easy' to install (for instance, because some elements of the required software stack are not readily available, and it's far too much work to compile them all from source), especially when it's something I'll use seldom or am just evaluating. Sure, the time saved in skipping all the requirements and running the container without worries more than compensates for the loss of performance (hint: when running Docker on my Mac, it's pointless to try to run anything else — especially during the actual container launch. Sure enough, when the application(s) inside are pretty much idle, Docker will not really hurt performance; then again, if I'm launching something inside a container, it's not because I want it idle all the time, but because I want to use it...).
The second disadvantage is a far more complex one. By making a container opaque — i.e. a 'black box' which 'just works', no assembly required — the end-user has no clue whether the actual software choices and configuration details are optimised for their particular hardware or use-case. It also becomes much harder to figure out whether all components inside the container are up-to-date and free of serious security issues — especially the kind of issues that are not present when running the container in complete isolation, in a lab, but which become apparent as soon as that specific container is dropped into the complexity of a server probably running a lot of different systems. (Again, this is not the recommended way of configuring cloud-based solutions, and that's why containers are so great for those — in theory at least, each computing instance ought to run just one container with one application/service inside; if you need to run anything else, place it on different instances, and make sure they don't conflict with each other; if they do, well, you can isolate the rogue instances easily enough.)
Imagine the following scenario (which is hardly hypothetical — it's very close to reality): on my admittedly underpowered server, MariaDB is basically the huge resource hog — mostly because I'm at the mercy of WordPress plugin developers who think that logging everything to the database is a 'good idea'. As a consequence, almost every resource has to be given to MariaDB (CPU, RAM...), while the remaining processes — doing pretty much everything else — have to share the few scraps that remain. Now let's assume that the Gallery Revival developer team decides that MariaDB is not really a 'good enough' replacement for MySQL, because there are one or two quirky (but much needed!) features which won't work under MariaDB (which, I need not remind you, was developed well after Gallery was first released...). Since the work to refactor the code to take those 'quirks' into account — or rather to avoid them, in order to support more databases compatible with the MySQL wire protocol (such as, say, Amazon Aurora, Google Cloud SQL for MySQL, or even a DIY MySQL-compatible RDBMS) — might be too much effort, the developers decide to bundle MySQL itself within the same Docker container. Now my poor underpowered server has to deal not only with two different web servers but also with two different database systems — not to mention the extra overhead from 'dockerizing' this container... all that for running software which is showing its age and could be replaced by placing every image/movie on Flickr (or anywhere else), forgetting all about running one's own server...
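Just to illustrate the duplication in that (again, purely hypothetical!) scenario, a 'batteries included' bundle shipped that way would look something like this — image names and ports invented for the sake of the example:

```yaml
# Hypothetical compose file for the scenario above: a gallery container
# shipping its own web server, plus its own MySQL, running on a host
# that already has nginx on :80 and MariaDB for everything else.
services:
  gallery:
    image: example/gallery-revival:latest   # bundles Apache + PHP inside
    ports:
      - "8080:80"    # a second web server, beside the host's nginx
    depends_on:
      - gallery-db
  gallery-db:
    image: mysql:8.0                        # a second database, beside MariaDB
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Two web servers, two database daemons, one small server — exactly the kind of resource duplication that containers make invisible to the developer and very visible to whoever pays for the hardware.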
In conclusion: I love it when developers in the community add an optional Docker container for download. In many cases, the majority of support requests/questions are not connected directly to the application itself, but rather to the environment it's running on. Since people can have a vast variety of quite different environments, trying to answer all possible environment-related questions may be close to impossible — and potential future Gallery users may come to see the software as possibly useful, but backed by a worthless community of developers and supporters. Open-source software with a friendly, engaging community, always ready to step in and solve issues, is generally seen as the more interesting alternative (as opposed to single-developer solutions where the developer never answers any requests). This trade-off is often easily solved by using some sort of virtualization technology and telling potential users to simply install the container and start adding images and videos to it — thus 'proving' to them that the only problem they have is with their environment; the application itself works nicely.
Thus, I certainly encourage and welcome an always-up-to-date, but optional, Docker container for Gallery Revival.
However, because Docker containers (or any other virtualization technology) have their drawbacks — for me, the most important one being the knowledge that just one 'magical' environment is the 'correct' one for running that specific application, which means something must be wrong with it — getting the application to run outside the container should always be an option, even if it might be hard for the user. In other words: the priority should be that installing the PHP source on one's system works under most scenarios; if it doesn't, you can always fall back on the Docker container (while eventually figuring out what is so different about your environment that the application doesn't run).
Anyway, please forgive my ranting. It's just that I have gone through similar scenarios so often (even WordPress is 'guilty' of the same mistakes, especially in a multisite environment...) that I'm well aware of the nightmare it is when things go wrong. I have experimented with some applications whose developers 'decided' to support only running inside a specific container technology, exactly because they were tired of answering way too many questions about running environments. I remember one case where the developers even offered the original source code, for those — like me — who were masochistic enough to run the code outside of a container. Not only was it a nightmare to set up — the insane number of frameworks it required just for basic functionality was mind-boggling — but, in the end, it did not work (or at least most of the functionality didn't), basically because the application was so badly written, with horribly-patched legacy code and a development methodology that stubbornly 'reinvented a lot of wheels' for no practical purpose except making the installation tasks even harder... thus 'forcing' potential users to give up and just run the container version instead.
Admittedly, I'm possibly over-reacting here; but the reason I first started to use Gallery — and still use it! — was that it was a simple, easy-to-install solution that took few resources and was super-fast. When considering alternatives — such as Flickr! — I tend to evaluate them against Gallery: what else do they offer that Gallery doesn't, and what kind of resources would I need to run them? (Needless to say, this makes me reject anything running on Java or Ruby on Rails...) So far — and I admit I haven't been 'shopping for alternatives' lately — Gallery, even with its many shortcomings (such as a WordPress integration plugin which is now hopelessly outdated and requires a lot of effort to get working), has beaten the alternatives. But from what I've read in this thread, things may start to become intolerably complicated in the very near future, and that's Not A Good Thing.
If the development guidelines for Gallery are departing too much from what made it so attractive 15 years or so ago, I guess it's time for me to move on to Coppermine — which I did use extensively a decade ago, for a 'special project' for a customer requiring access to a photo gallery with some form of API. At the time, Coppermine was much better suited to the task than Gallery, so I had to concede and use Coppermine instead. Coppermine, BTW, is still being actively developed. It allows fancy things such as single sign-on by sharing login data with a dozen or so applications; it's easier to tweak/change the theme or develop one's own; and, as said, it has (had?) a better-documented API for integration. Why am I not using it instead of Gallery? Well, there are many reasons, but I'd say 'bad code' would be at the top: while I'm not a huge fan of 'frameworks', Gallery has the advantage of being just an application running on top of Kohana, and Kohana, by itself, provides pretty much everything needed to develop a full application — one that runs very fast, thanks to its many levels of caching and very clever code optimisation. Coppermine does 'everything from scratch', and not necessarily in the best way possible; some programming decisions, made very early on, affect the whole development process, and it's hard to change the overall conceptual architecture, since Coppermine depends on it. Gallery — in theory — changes easily if Kohana is changed; granted, the lack of further development of Kohana is a real problem, but, as said at the very beginning, it might be relatively easy to 'port' it to Koseven (or whatever Kohana fork is chosen) and enjoy continued development at the core level at least, leaving the community of Gallery Revival developers free to focus on the essential features they wish to implement in the application itself...
Anyway... enough from me. I know; I talk too much. But I only do so about things that I'm truly passionate about; I have been a huge fan of Gallery, even when there was no 'Revival' on the horizon. Now, of course, I'm even more passionate, knowing that, at least, there is the will to pursue this project further!
Thanks again for all your volunteer work to keep this project alive, and please disregard an old loudmouth.