./configure examples for MySQL5 Apache2 PHP5 SSL1

Posted on 2006-04-07
Last Modified: 2012-06-27
Who has some configure scripts handy as examples for compiling these to all work together on a Linux machine?

Please post the configure scripts or provide a link.

Funny how none of the developers supply something like configure.example

Question by:GinEric
    LVL 24

    Expert Comment

    Very funny. Just look for any book about LAMP and you get it all together. Now why should an Apache developer care how Apache might or might not work together with some other package?

    SSL1 is so outdated that it is hardly worth describing. So nobody is using it if he/she is not forced to.

    Now every package comes with documentation on how to use it, and in the PHP books especially you find everything you need to set up MySQL, PHP, and Apache. So why not start there?

    On the other hand, if I run some apt-get install here, I get all these packages installed and, guess what, they even work.

    Funny what demands you have without having shown the slightest hint that you have started getting things installed.

    If you can not do it yourself, you should be prepared to pay someone to do it for you. But of course that does not go hand in hand with free (beer).

    LVL 12

    Author Comment

    Thanks Fred, that really gave me a laugh too!

    I read your profile; so, you like old programs eh?  Guess what, I've been programming since before you started studying those old programs and many of them have my design and handiwork in them, not to mention the hardware design of your current computer, which also has my handiwork and design in it.

    What made you think I hadn't started on any of this?  Why do you think SSL is outdated?  Not as far as I know it isn't.  How about LFS, ever heard of it?  I didn't ask anyone to do it for me, I just wanted to see some examples.  Your company could not afford my salary, believe it, and I don't know that anyone can do it but me, many have tried and failed.

    Here are just some of the scripts I've written:

    Unlike other developers, we do provide working examples.  Read the line where it says:

    "In my case this would come out to be:


    And =OPTIONS.c would contain all of the configure script options, perhaps including the --with-layout=LAYOUT options. In such case, it would only be necessary for the developer to change a single <$OPTIONNAME>[=ARG] to :

    --with-most=OPTIONS.c "

    I'm way ahead of the simple-mindedness of apt-get, and working on a new configure and automake, in fact all of the autotools, including libtool, that will boil all scripts down to a single element to change to make any program compile on any operating system basically out of the box.
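    As a sketch of that single-options-file idea (the file name and layout below are my own illustration, not from any released tool), the wrapper reads every configure flag from one editable file:

```shell
#!/bin/sh
# Sketch of the single-options-file idea: all configure flags live in
# one editable file, and a tiny wrapper hands them to ./configure.
# File name and flag choices are illustrative only.
cat > configure.options <<'EOF'
# one flag per line; comment lines start with #
--prefix=/usr/local/apache2
--enable-so
--enable-ssl
EOF

# strip comments and collapse the flags onto one command line
FLAGS=$(grep -v '^#' configure.options | tr '\n' ' ')
echo ./configure $FLAGS
```

    Changing the build then means editing configure.options, never the script itself.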

    Besides, I know the parent languages of C and all the others, Espol and Algol, the oldest but most advanced Operating System and Compiler there is; everything else wants to be Espol, Cobol has wanted to be Espol for 40 years and still isn't there, and Dennis Ritchie used Algol to create C.  I've even worked on the compilers and operating system software, while designing the hardware and making it work too.

    All I asked for here were some example scripts, hoping to see what other distributions are using as scripts to compile so I can incorporate their layouts into our new software and test it.

    So, do you have any examples?

    LVL 24

    Expert Comment

    I have not said SSL is outdated, I have said SSL1 is outdated, and that's for sure.

    The whole initial message looked like what I find more and more:
    - give it to me
    - immediately
    - don't bother me with trying myself
    - do it for me, now and for free.

    I would have appreciated it if you had pointed out a bit more context and the like. So I was just thinking what an a....

    So what do you really want, where are your problems specifically?

    Getting LAMP up is not especially challenging, if you are used to this.

    LVL 12

    Author Comment

    Sorry, SSL1 is just my version for current SSL in my own nomenclature.

    The context is only that I'd like to see some other examples to compare to my own.

    I've worked with LAMP and Xamp, or whatever its name is now; it was too restrictive.

    Objective:  look at other configure script examples
    Problems:  many; I am trying to see what other compilers have done to get around minor dependencies that cause major compile or run-time failures.

    Chain of events:  try to compile something, anything, say Linux From Scratch or even SSL.  Result: minor dependency failure with lefence.

    No mention of lefence in the docs.  "electric-fence_2.1.13-0.1.tar.gz": no config file; the compile is a dead end until I create a config file for an undocumented dependency.

    Numerous compiles later, standstill and back where I started.

    Why?  One missing file and that in a less than needed dependency.

    If it can happen with little SSL, imagine what will happen with MySQL, PHP, and Apache all cross compiled together.

    It could take years to compile them together.  In fact, it has taken more than a year already.  The only thing that consistently compiles without error is Apache; all of the others fail similarly to SSL, with dependencies that lack both documentation and portability.

    So, I need to see how others have done it in the past.  I can't find the method for Slackware 10.1, and believe me, I have looked everywhere.

    I have succeeded with various earlier versions of all of them, even some Apache2 with MySQL5 and PHP5, but then they don't work on the httpd server.  On Windows:

    as well as on a nearly identical Linux:

    Working on a new built server, I want it up to MySQL5 Apache2 PHP5 with SSL so that I don't have to replace the whole thing when the new tarball comes out, which may, in fact, only come out if I finish it as a package, or subcomponents, here.

    The configure command is quite extensive for the last version stated, and is about the only good example, from Apache naturally; I don't have comparable ones for MySQL and PHP to use as a guide, as I do with the Apache info.php command.  I don't know why PHP did not include both its configure and that of MySQL, but they are simply not there.  The changes from Apache 1 were radical: elimination of a lot of mods, and replacement of the old
    --with-mod_php, --with-mod_mysql, and so on, with LoadModule.  I know what the LoadModule commands are, but I can't get a decent configure script accounting for the necessary cross compile conformant to Apache's extensive module inclusions in its build.
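    For reference, configure invocations of roughly the kind I'm after looked like this in the Apache2/PHP5 era. Paths and exact flag spellings vary by version, so treat these as a hedged sketch rather than a known-good recipe:

```shell
# Apache 2 with shared-module support and SSL (prefix and SSL path illustrative)
./configure --prefix=/usr/local/apache2 --enable-so --enable-ssl \
            --with-ssl=/usr/local/ssl

# PHP 5 built as an Apache 2 shared module with MySQL support
./configure --with-apxs2=/usr/local/apache2/bin/apxs \
            --with-mysql=/usr/local/mysql --with-openssl
```

    httpd.conf then only needs the matching LoadModule line for libphp5.so, rather than the old --with-mod_php style build flags.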

    And that is why I asked for some examples, to work it all out so that it does work and a new package can be built.
    LVL 24

    Accepted Solution

    I'll just pick up one point: openssl compilation. There's no dependency on libefence if you do not ask for it.

    I did the following on my box:
    sudo apt-get -b source libssl0.9.7

    now I have my .deb file here
    -rw-r--r-- 1 root root 1334398 Apr  9 09:46 libssl0.9.7-dbg_0.9.7g-5_amd64.deb
    -rw-r--r-- 1 root root  739632 Apr  9 09:46 libssl0.9.7_0.9.7g-5_amd64.deb

    And that's all that is needed here to get openssl compiled. I did the same some time ago for Apache, and I just fetched PHP from its homepage and installed it on at least 2 machines without any trouble.

    But I think I know your point. We are developers on Linux, and whenever you dare to
    ask a question about gcc you will get the big nothing. The answer you'll get is: hey, it's free, read the sources, and the like.

    Yes, that stuff is free, but understanding it is not. They will tell you things like: if you want to know that, buy a contract from RedHat for around 20,000 US-$. Quite expensive for "free" speech.

    The attitude on Linux is very simple:
    - all you do yourself has to be freely available to anyone
    - information on it is "enclosed" in the brains of the developers working on it (a clever idea to make yourself irreplaceable)
    - if you want information, you better have big bucks to get it

    We still have managed to implement a C compiler on Linux, but I can assure you it was hard work.

    But back from ranting.

    You see that I had not the slightest trouble compiling openssl myself.
    I had no trouble doing the same with Apache and PHP. So the only thing left for me would be MySQL.  I dare say I would not expect much trouble with that either, but I may be wrong.

    There is no efence dependency in openssl if you do not ask for it;
    this is the only place I found it in all the sources:
    # Our development configs
    "purify",      "purify gcc:-g -DPURIFY -Wall::(unknown)::-lsocket -lnsl::::",
    "debug",      "gcc:-DBN_DEBUG -DREF_CHECK -DCONF_DEBUG -DBN_CTX_DEBUG -DCRYPTO_MDEBUG -DOPENSSL_NO_ASM -ggdb -g2 -Wformat -Wshadow -Wmissing-prototypes -Wmissing-declarations -Werror::(unknown)::-lefence::::",

    Now installing efence is just another apt-get away.

    I'd argue this package management is the greatest gift from the Debian people. I would strongly suggest you use it and see how far you get with it.

    If you ever feel there should be a "documented" Linux version then feel free to contact us, we'll happily work for you on this ;-)


    LVL 51

    Assisted Solution

    Hmm, still confused about what you're asking for.
    Are you telling us that on Slackware only Apache compiles flawlessly, but not openssl, MySQL5, and PHP5?
    LVL 12

    Author Comment

    Thanks again Fred.  The main reason I use Slackware is because it has always been ahead of the others and pretty much sticks to the System V specifications.  The other distributions seem to have enough contributors already.  It did annoy me that when Patrick Volkerding got sick, he was pretty much left behind, and basically forgotten, by the others who have built distributions based on his.  Slackware is the most like a mainframe environment, much more powerful than the often PC-oriented other software and distributions.  And flat out, I just prefer it.  I've tried the others; they're okay, but they lack a lot of Slackware's features and are harder to recover when they crash.  That's just my experience.  Early on, RedHat was all GUI on its install, and when it failed the first time I tried it, I still believe I correctly summarized its failure as one owing to too much dependency on the GUI, like Windows.  It was more for out-of-the-box newcomers to computers than for seasoned professionals.

    Debian, Mandrake, Suse, I'm sure they're all great, but I prefer Slackware the same way I prefer a big car from the 1950's; they can take a beating much better than the plastic personnel modules of the 21st century.  Even a 1970 Bonneville can take more than a humvee [the cheap $50,000.00 model sold to those not smart enough to ask for the armour-piercing-resistant version at over $250,000.00].  I also prefer the stainless steel DeLorean to plastic Lotuses too.

    I don't even remember much of the reason why Electric Fence was included, except that there were some crypt and hash routines that performed much better, and security was tighter with them.  Bad mallocs in C, I believe, and I know that feeling of sporadic failures and intermittent problems, as well as the exploits that can be performed upon such weaknesses.  And I want the full debugger capabilities for other programs that are inevitably going to bump and grind and fail along the chain of compilations.

    apt-get and slapt-get are fine for binaries, but that is not what I'm doing.  I'm compiling and testing often very large packages, such as the MAPPS combination, and lately the incorporation of GCC 4.1, by compilation of this very large package.  I'm ready to move up to full 64-bit stuff and can't rely on even slightly dated software.  I also work in hardware design, and the arena for problems between hardware and software in the newest systems architecture is predicted to grow as exponentially as the divergence between 32-bit and 64-bit concepts increases.  Simply put, no one is really quite ready for the real 64-bit architecture, from what I've seen thus far, even at Intel, AMD, Cyrix, Linux, and Windows.  Most people, programmers and others, don't even fully understand the architectural concepts yet, and most do not know how to effectively optimize and use them.  Thus, an Intel timing buss error shows up as crashes because the speed of the 64-bit system was simply "too fast" for the older drivers, and there were oversights.  Vector Indirect Addressing [which should really be called Matrix Addressing] has confused many software authors, as have look-ahead logic, pre-processing, pipelines, barrels, restartable micro-operators and instructions, just a whole world of changes they were not fully aware of.  Intel, AMD, and probably Cyrix are still resisting the full-width 64-bit Interrupt Buss as a physical Interrupt Buss necessity.  Just as they resist fixing the word "bus" to make it "buss" to distinguish between a computer term and a vehicle that carries people in mass transit.

    Linux is well documented by the Linux Cross Reference, which I wish other developers would use so their code is a little easier to search through when there is a problem.  C, of course, is very well documented also.  Dennis Ritchie and Linus Torvalds both had an excellent mainframe to work with when creating their respective software works: one of ours running Espol and Algol, which is why C and Linux look so much like Algol and Espol, as does the Unix Operating System from Thompson [who, from what I remember, actually worked in design for J. Presper Eckert and John Mauchly], because AT&T and Bell Labs were also using our systems in the co-development of Unix.

    The big programs are well documented; it's the little ones that seem to lack documentation.  When we worked on the first 64-bit systems, over 30 years ago at least, and on the current "newest" concepts a few short years after that, we also had responsibility for both the hardware and the software.  The two were pretty much inseparable in research and design; if you changed the hardware, you had to change the software, and vice versa.  Long before Unix, the Master Control Program [MCP] was already doing all of the things Unix made its claim to fame on.

    Right now, there is an entirely new design being developed and it can't be done by borrowing from binaries and tarballs; it must be built from the ground up, that is, like Linux From Scratch.

    While doing so, automake and the autotools need to be corrected somewhat, since they lost their object orientation and effectively became too interpreter-script dependent and bloated.  That is, instead of being quickly correctable using a module, such as a file of options, they have become troublesome to troubleshoot through millions of characters [at least] of generated scripts.  This is simply inefficient.  If everything in C or Linux is supposed to be a file, then why are there so many "lines of code" dependencies that keep breaking compiles?  The idea was to have interchangeable modules [files], not to pull and replace a punch card [lines of interpreter code] every time a compile fails, rearrange them as you would with any batch program, and then try again.

    It just seems that all the variations on compile failures are exactly the same as they were in about 1955; a line of code is a card in such systems, and very comparable to current compile orientation.

    So, the approach to compiling itself needs to be reorganised into something more efficient than pulling and replacing punch cards.

    The lines above, that you have quoted for your configs, are basically the $OPTIONS cards that precede the compile deck: options passed on to the compiler before the actual compile-time run, flags, some global calls, etc.

    I have lots of experience with package managers; that's not what I am looking for, though, since this is all from scratch and is the basis for providing newer documentation.  I have two projects which will probably be ignored by a lot of developers until after the fact, Boss and Meddac: the next step in computer architectural design and systems design, the culmination of the last six decades of computer design, termed GenN.  I'm trying to beat the Chinese, who, if they discover it and its potential, will most likely simply absorb the European and American computer design industries in total.  That's because they will be way ahead of current design approaches, especially those of Silicon Valley West and all the Western universities and studies, who do not seem to be listening very well on this subject.

    The projects are massive, and I don't expect to complete them alone.  But until such time as they are acknowledged and properly funded, the designs, for the most part, stay in my head, where someone in California can't take credit for them.  I'm being a little selfish about these design concepts this time around.

    I do appreciate your answers; they give me a lot of insight into what to do and how to proceed.  The workload being tremendous here, any and all help is appreciated.  I have solved some of the compile problems by putting everything into a simple shell script, which I then call, making the compile a lot easier and a one-word command, at least for the configure phase.  Again, the concept of a single file, correctable and highly portable.
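    A minimal version of that one-word wrapper might look like the sketch below. The script name, flags, and DRYRUN switch are my own illustration, not a released tool; the dry-run default just prints each step so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# build.sh - hypothetical one-word front end for the configure phase.
# All the flags live in this one small, readable file instead of in
# generated scripts.  DRYRUN=1 (the default here) prints each step
# instead of executing it.
set -e
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = "1" ] && echo "$@" || "$@"; }
run ./configure --prefix=/usr/local/apache2 --enable-so --enable-ssl
run make
```

    Fixing a failed compile then means editing one short script, and a real build is just DRYRUN=0 ./build.sh.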

    It will eventually grow into NewConfigure and NewInstall, perhaps NewMake and NewBuild.  Which is what I want: a one-word command to go from source to installed packages, with portability to all Operating Systems and Distributions regardless of hardware.  Done by simply altering the contents of, at most, a handful of easy-to-read, easy-to-use, and easy-to-troubleshoot options and configuration files, instead of millions of lines of script.

    I'm nearly there, I just need a few more configure examples from other distributions.

    LVL 24

    Expert Comment

    Wow, this is a lot of stuff to read, and I'm still reading. However, one thing is simply wrong:
    "apt-get and slapt-get are fine for binaries, but that is not what I'm doing.  I'm compiling and testing often very large packages, such as the MAPPS combination, and lately the incorporation of GCC 4.1, by compilation of this very large package."

    I told you I used apt-get for a source code build. This can be done for every package you'll find in Debian. I also disagree about Slackware's abilities; I used it years ago and it has not kept up. You add the proper places in your /etc/apt/sources.list

    and then you fetch and build sources with
    apt-get -b source name_of_package

    I've done that with a lot of packages and can hardly remember any problems.

    LVL 24

    Expert Comment

    Now a few other remarks. Well, I can't tell what you find lacking. I'm running Debian
    on an AMD64 for more than a year now, and we have developed an lcc-linux64 compiler and an lcc-win64 compiler. So I'd argue we are somewhat in the game.

    Maybe you're right to insist on compiling everything on your own. If you feel the tools are inadequate, you may check new ways of doing things. Currently I tend to use some "scripting" language to drive all that stuff. Of course you need that scripting language then, but I feel it's a cleaner and easier-to-extend route than the autotools stuff.

    Feel free to disagree however ;-)

    LVL 12

    Author Comment


    Slackware of course has fallen a little behind.  Above, I said the other distributions showed no sense of loyalty whatsoever when its founder got ill; do I have to say they basically left him for dead to make the point?

    I was there through the machinations of creating the other distributions, nearly all based on or inspired by Slackware.  That includes Debian, RedHat, Mandrake, and the others.  A lot was going on in irc channels and discussions, by the various admins in places like #Dragonlair and such.  You could see each distribution get developed and who was developing it.  Some GNU and GCC people were there as well.  Nearly all of them were scripters rather than programmers.  Hence the massive scripts in both Linux builds and GCC builds.

    I took off on a different course, using compiled machine code to further compile machine code.


    No, Slackware compiles all of them; it's just that Apache is the only one with true portability.  MySQL and PHP simply will not admit to their lack of true portability.  While MySQL and PHP documentation is extensive, it always seems to miss the simple stuff, like "how to start MySQL for the first time," and similar.  Plus, Apache pretty much explains its configure scripts, with examples, while MySQL and PHP do not.  MySQL and PHP have my.cnf and php.ini, which both use plain-text usernames and passwords; it is a very bad idea to have usernames and passwords in plain text anywhere on any computer.

    It is simply the lack of exemplary configure scripts for actually working versions that remains.  I just think that if you're going to teach someone how to swim, you should be able to "show them" how you do it.

    Scripts have been a necessary thing ever since computer Operators came to be and had to run various jobs during the night when none of the engineers were around, business runs, etc.  Thus things like Work Flow Language [WFL] and Job Control Language [JCL] were born [in that order, too!].  Some of these first scripts, derived from the original BASIC, led to things like Lisp, Perl, and others, and eventually all the shell scripts.  Shell scripts were intended for interactive human intervention, as with the Operators, especially the third-shift ones.  They weren't really meant for compilation, which was supposed to be fully automated.  However, to help the Operators, various decks of cards, like the Loader Deck, followed by the $OPTION deck, then perhaps the actual compile deck, were first incorporated by rote [you put the two decks right in front of the main compile deck], and later by calls to editable files on disk.  And this was way back in the 1950's and 1960's.

    A true Cold Start of a system meant all naked disks, and starting everything with the Loader deck, followed by the Kernel deck, then various options, which called on PE Tape machines and such to load the disks with the Operating System, and thereafter begin a nearly week-long process of initialising a system.  Systems with well over 1,204 disk pack drives [still in use, and much more vast than simple PC hard drives], 65,536 modules per communications processor and the like, and over 2^96 addressing, and more, do not install in one hour.  One week, maybe, if you're lucky.  All started out with a punched deck.  Later, we dedicated a small minicomputer, and even later a microcomputer [not the PC kind you're thinking of, but a micro mainframe], to handle the old card-punch Cold Start.

    Basically, each system was unique in its configuration on a "per customer" basis.  They didn't even have the same Interrupt Line assignments, let alone differing paths and users, etc.  Because they didn't have the same number of processors, I/O's, memory modules, nor devices, the install had to be completely portable and configurable by the engineer who was first Cold Starting the system.  Later, applications could be installed by operators as they were needed, usually at the customer site.  The scripts didn't go away, but, for the most part, they were either reduced, whittled down, or replaced, and the replacement was a lot of compiled machine code, to speed up the Cold Start and application-adding process with full portability.

    What you had was an archive of configure scripts for every system you shipped.  Thus, the examples were there from the start, before the final sale, and shipped with the customer's computer system.  And they all worked, because otherwise the customer did not accept the system, nor pay for it, until they did.

    And at prices in the tens of millions of dollars, you gave the customer what they asked for, working configure scripts.

    "Slackware is the oldest maintained distribution to date."

    apt-get and slapt-get are a crossbreed between Volkerding and Ian A. Murdock of Debian.  I forget the exact history.


    And Ian has to admit that Pat was there a month before him.  Both were SLS.

    I already had over 20 years experience in computer design by that point and Tim Berners-Lee was using our system before 1991 to write the Hyper Text Markup Language at Cern, which was using our computer to run Cern.  Which I configured, personally, and Cold Started.  Helsinki U was, from what I remember, also using the same system when Torvalds attended.

    Back in 1980-1982 we bought DEC, Honeywell, Sperry, and Univac, so that AT&T was using all of our computers, and most of our personnel, while claiming authorship of Unix.  But Ken Thompson, as I recall, was signing off on various things at our complex, so I maintain there were first six, then nine, companies that wrote Unix, which is why Bell could not obtain either copyright or patent monopoly on it; one of those partners was the U.S. government, which had ordered the Universal Language in the first place because it was tired of having to deal with nine companies and systems that would not work together.

    All of them sat down on our computers and did this stuff.  That meant that they had to at least have a basic understanding of our Operating System and languages.  Although they might not admit it, that includes IBM and AT&T, as well as the others involved.

    If I look across all the distributions, I see people working on things derived from our system all the time, even the hardware concepts of Intel, AMD, and Cyrix.  Motorola was also one of our main partners, along with the Department of Defense, etc.  Western Electric and the Westinghouse family were right in there too.  The Marconis, Bells, Westinghouses, and Watsons, all right in the focus of what "we" were doing.  It's a very long, detailed history from mainframes to microprocessors, which began before Motorola and Apple and wound up with Linux, Apple, Microsoft, first Motorola, then Intel, AMD, and Cyrix.  Even the early machines, Altair, Commodore, Mac, are in there.  Things do not happen in a vacuum; they are all derivations of previous works.  There was not really one person who actually invented all of anything.  There were thousands of people involved.

    Too many of these tend to be forgotten.

    Along with them, some great ideas get overlooked or forgotten in later years.

    The whole idea of Unix, which begat Linux, was portability.  That idea does seem to be completely forgotten today.  And Cold Start is only defined here, as far as I know.  And I know my definition is exact and right, because we created the idea.  We actually designed the circuit for the Power On Self Test [POST]: a single shot and some gating and flip-flops to put the machine in a "known state" before trying to initialize the system.  It required 7 clocks, thus the single shot and timing circuits independent of the system clock, to initialize RAM and other onboard stuff before trying to use RAM and other stuff.

    Very basic stuff.

    With a radically new and different hardware design, you have to start from scratch.  You can use existing systems as models, but you cannot depend on them for design.  Design must be unique.  And that requires the "from scratch" approach and reference to older systems of design, including both hardware and software.

    And all of this leads me to an inspection of existing configuration scripts and the like, with the intent of getting rid of their dead skin, and trying to incorporate only their best ideas, instead of using them by rote.

    Rote memory, like multiple choice questions, is a good place to find out how to ask better questions and get off of multiple choice and start thinking instead.

    And thinking is an absolute necessity to new design.  With radical design change, multiple choice and copying is just not enough.

    It is so much like the antiquated 16-bit Interrupt Buss that the analogy, and its stagnation at only 16 bits in Intel, AMD, and Cyrix, is disturbing.  I, for one, am quite tired of playing plug-and-play with only 8 available interrupts in modern systems that have more than 8 different devices trying to function simultaneously.  It's as if the Intel, AMD, and Cyrix engineers do not even know why there is an Interrupt Buss to begin with.

    Which may be quite true, since most of them have no "from scratch" computer systems design experience, relying instead on past practices within an industry that has nearly always copied its mainframe predecessors.  This does lead to a lot of errors in 64-bit design for microprocessors and related software, while the divergence in compile methods leads to unportability.  The result is entropy of the once-converging universal approach to systems design, software design, and packaging.  That is, decay and rot of the universal application of software to hardware systems.

    It does show up, in compiling and installing, as a nasty series of attempts and rewrites for most people.  A few experts can work around it, but not the general group of programmers or the public.

    I see millions of requests for help with compiling nearly all programs.  Not from the experts, but from those wishing to learn computer systems and programming.  Some are quite learned and adept.  If it's not their fault, then it must be the entire method of configure and compile that has somehow gone awry.

    I believe I can see how that has gone awry and am working on a way to fix it, because it is slowing down everything, especially development.

    I'm just going to award points and save this discourse for book publication.

    LVL 24

    Expert Comment

    I found this quite interesting, so I will go on with it if you don't mind. A basic thing, IMHO: getting things compiled yourself is not an easy undertaking. However, with a little experience you get used to it. I can't tell how easy it was or is to compile on AIX, HP-UX and the like; I've used Linux since 0.99.x (I don't remember exactly) and I can tell you the requirements to get things running are far less demanding than they were.

    I guess it's partly due to the fact that I've done this for ages, but I have set up different machines all over again; my journey has gone via Slackware, over systems nobody remembers, then Debian, then Suse, but I returned to Debian finally and I don't think I'd like to move on. My Debian has survived since 1998 with minimal problems while updating; in this time it has gone over three machines, from a P II 300 to an AMD 1700+ and now an AMD64 box; all my files are intact and my kernels have now gone up to 2.6.10 or so. I have some other 32-bit servers running and I've installed Linux on at least 2 different notebooks (I always try not to use the most current software; the only exception was the AMD64, which has to be run because of our compiler development).

    Now let's go to your points about MySQL and PHP. I think you'll find tons of examples on how to set up this stuff. I can just say that installing PHP is one of the easiest tasks. So many people are using it, so it simply works, IMHO. I'm not a MySQL guy; I'm in the PostgreSQL camp. I decided 2 or 3 years ago to go with PostgreSQL, especially because I like to check out OpenACS. It's also quite useful because of some OO features, which we have used to build the data model for our pages. Postgres, Apache and the like simply work. So I can not follow your arguments; most people can simply install the binaries for their platform, check a few configuration options, and then they are done.

    Now if you feel that is inappropriate for you, so be it. You say you like to stick to Slackware, so fine, but you can not blame all the packages for being difficult to build on Slackware; the other Distributions managed to get that going. And as I pointed out more than once, it's not more than an apt-get -b source package_name away on my box.

    I do not want to know what autoconf and the other stuff has to run and how much garbage is in there, although I have an idea. But I'm not in the mood to work on replacing the zillions of autoconf scripts just to make someone else happy. If you dislike the state of the art, you either have to have deep pockets, change it yourself, or stick to things as they are.
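    For readers wondering what those autoconf scripts actually are: the generated ./configure is expanded from a short macro file. A minimal sketch, assuming autoconf is installed (the project name and file contents here are purely illustrative):

    ```shell
    # Write a tiny configure.ac -- the macro source that autoconf expands
    cat > configure.ac <<'EOF'
    AC_INIT([hello], [1.0])
    AC_PROG_CC
    AC_OUTPUT
    EOF

    autoconf        # expands the m4 macros above into a portable ./configure script
    ./configure     # checks for a working C compiler and writes config.status
    ```

    The "zillions of scripts" in real packages are this same mechanism scaled up: hundreds of feature tests instead of one compiler check.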

    What I find not OK is ranting about the "sad" state. That is unfair by many measures. No, the times were not better in the past; they were less elaborate but also less complex, and the software I can run today was a dream not all that long ago. The only thing I agree with is that things are getting more complex.


    LVL 12

    Author Comment

    I don't mind at all.  Some things were a lot better in the past though; savings vs income was 25%, salaries were higher in adjusted dollars, you could buy a house cash and a car cash, taxes were lower, there was no state or city tax, but mostly people knew how to do things from scratch, to creatively make a part when they couldn't buy one.  The world was a lot less of a rat race and you didn't have to have two jobs just to get by.  You were allowed to express yourself without people getting all paranoid, there were no SWAT Teams, assassins, nor serial killers.  Food was natural and didn't cause cancer or mad cow disease, and babies weren't born without a brain stem because someone was dumping toxic waste in the Rio Grande.

    It was fun working in the computer industry years ago.  It was just as much fun being a steel worker.  And no one lived in a cubicle at work under the eye of a video camera.  People were your friends at work, not backstabbing enemies out to get ahead regardless of who got crushed in the mad rush for success.  And success was having a nice vacation, which increased every year you worked for a company, sometimes up to 12 and as much as 20 weeks a year.  The pensions were better and no one was stealing them.  None of the companies were going broke.  Everyone was working and there were more than enough jobs to go around.  Mom & Pop stores were where most people did their shopping and they knew the owners.  There were dances on Friday and Saturday night where people kept their clothes on, no one got shot, and if a fight broke out it was only between two people who boxed fairly.

    I lived through the Golden Age of the American Empire, and I know it.  That age is over, it is definitely sliding downhill really fast now.  I helped make many of the advances in computers and industry.  While they are nice in some respects, they are not all nice.  The uses of them, that is, not the actual advances.  Having a computer in every home was one of our dreams, because of the power and speed it provided.  It could cut the time for printing musical manuscripts down from one year to one hour, it could do your checkbook a lot faster than you could, it could teach you Calculus with pretty good graphics, and a lot more.  But where it failed was in social and economic benefits.  As robots and computers were able to do more manual tasks, the jobs dried up, the mom & pops got bought up, taxes went through the roof to support the people the robots put out of work, the robot owners became slave owners, the dance halls became exotic bars, cars turned into death traps.

    Software is nice, but it's not really a substitute for all of the former things.

    The distributions are fine, I've no problem with them.  The world as it is today, though, I have a tremendous problem with; its basic flaw is that people either listen to TV or the computer and no longer listen to each other.  From classrooms to corporate headquarters to government, no one is actually listening anymore.  They are too busy trying to compete for those scarce jobs and the resources that come from them.  A world economy of basic fear, insecurity, driving mass greed, which brings mass ignorance.  I already know where that will wind up, as it has in the past, and the 21st century does not impress me at all as anything other than a repeat of the first or second century.  They hung a guy on the cross back then for "ranting" too.  But I'm not Him.  It's very odd to be glad one is not young.  I have no desire to spend my youth in this "Brave New World."

    Thanks Fred for listening.
    LVL 24

    Expert Comment

    People nevertheless die later and later; I guess we are not doing all too badly ;-)

    LVL 12

    Author Comment

    Not too bad at all, I'm free, play music all day, goof off whenever I want, and generally ignore most of what does not encroach on my universe.  I do expect to live forever, thanks to my wild youth, keeps the genetic code in renewal state.  This is basically what I do when I can't stand to work on some code or compile any longer, cook, read some interesting stuff here, ask a question, answer a few questions, a lot of repairs for other people, and watch movies while I'm cooking or washing dishes or doing laundry.  It's a weird life, I guess, but I'm quite happy with it in my recording studio home.  The next few years on the road will not be so easy, so I'm sucking it all up right now.

