Journal: DNA Dragnets and Identification

Has anyone out there heard about this new practice police are using to catch criminals? It's called the DNA dragnet.

Here's the short form. Over the last few years, there have been several cases where police request that large numbers of people, sometimes approaching a thousand, give their DNA to rule themselves out as suspects in a case. Those who refuse to submit to the procedure are treated as suspects.

Now, I'm not quite sure how I feel about this--I only found out about it last night while watching 60 Minutes. I am sure, though, how I feel about police using this to solve crimes without either court oversight or a specifically scoped law controlling the practice. Last time I checked, to become a suspect in a criminal case police had to have probable cause--anything short of that is harassment. You can't even be pulled over for a traffic violation unless the cop has met this burden; how can you become a suspect in a rape or murder investigation simply because you were anonymously caught in a DNA dragnet and refused to cooperate?

Apparently, England has no such qualms. They have taken to routinely using this technique when they run out of other leads to follow. Germany has run DNA dragnets including thousands of people. We in America need to make our feelings about this practice known to our representatives and authorities before it becomes standard practice here as well.

I am not saying that I think it should never be used...though this is very possibly the conclusion I will come to on the matter after careful thought. I've long been conflicted about privacy issues involving identification of individuals. For instance, my state, California, now requires you to give a thumbprint in order to get a driver's license. Is this a problem? I'm not sure...privacy advocates argue that requiring such a thing is the action of a police state that considers its citizens as criminals before they do anything wrong. Somehow, though, this argument has never resonated with me.

To understand my viewpoint, ask yourself: what is the purpose of issuing IDs in the first place? The point, dear reader, is to identify you. I imagine the driver's license originally existed merely to identify you as a licensed driver, but I think that we all know that this is no longer the case; driver's licenses are now used for that purpose and as a state ID, same as the one you're entitled to if you're not a licensed driver but want a form of legal identification. So it seems to me that in order for a privacy advocate's argument against fingerprinting to be self-consistent, they ought to be advocating the abolition of IDs altogether. I've yet to meet anyone that can propose a reasonable means of doing away with IDs altogether. So the same argument privacy advocates make about fingerprints could be applied equally well to the head shot present on every photo ID, the address...the concept of the ID itself.

All this means that even most privacy advocates accept that some form of identification is necessary. So we find ourselves on a sliding scale from no IDs at all at one extreme to a global ID that uses every bit of technology available to disambiguate you from the entire global population (fingerprints of all ten fingers, DNA information, 3D model of your face and body, etc). If we agree that the need exists for the state to identify citizens, what degree of identification is necessary and reasonable?

One of the more nuanced arguments I've heard draws the line at DNA evidence. This argument is based on the notion that citizens should be willing to give enough information that authorities can reasonably identify them, while limiting the exposure to abuse as much as possible. Some undoubtedly feel that a photo and an address are already too much, but I think most reasonable people would recognize that the true jump in exposure to such abuse doesn't occur on this sliding scale until we get to DNA. The potential to abuse DNA is much more serious than, say, fingerprints. Your DNA could be given to a health insurance company that discovers you are predisposed to a certain illness and cancels your insurance as a result. (Is it only a matter of time before insurance companies require a DNA sample before they'll insure anyone? I think that once DNA becomes cheap to process, this will probably come to pass at some point anyway. Perhaps the issue of the DNA dragnet is simply prompting a conversation that we need to have anyway.)

Another factor in this discussion of identification is the level of granularity. Should we move to a national ID card, or leave it at the state level? Personally, I'm not against the national ID card because I think the benefits outweigh the risks. Besides, consistency requires that, if you are against a national ID card, you are also against interstate cooperation. Have you ever gotten a speeding ticket out of state only to have it show up on your driving record in your home state? Of course no one likes to get nailed with a ticket, but I think you'll agree it would be unreasonable to expect that each state will operate completely independently in this regard. I mean, what if a loved one of yours gets kidnapped and taken out of state? Would you advocate that the state authorities follow the trail to the state border, and then simply throw up their hands and head back to the office? Of course not...in that case, you'd be all for interstate cooperation and information sharing. States spend millions of dollars every year sharing databases...we might as well save all these tax dollars and just go national. There's something that rings false about a position that in principle agrees with the need for collecting and sharing data, but finds some degree of comfort when states are unable to exploit this capability due to technological difficulties.

National IDs are more in line with our shared consciousness anyway--when travelling abroad, if someone asks me where I'm from, I start by saying "America," not "San Francisco" or "California." (Contrast this to natives of India, who will often specify their home state within the country...in my experience, Indians typically do not share a strong national identity.) Besides being cheaper, a national ID would perhaps allow us to get better control over the illegal immigration situation (which is why national IDs will never happen--both parties lack the political will to deal with the appalling state of our immigration situation).

Anyway, as I intimated at the top of this essay, I'm pretty well undecided on how exactly to approach the issue of DNA dragnets. Do you have any thoughts to contribute that might help me form an opinion on this?

Journal: Big-O Versus Medium-O

Did that title get your attention? Sorry to disappoint, but this essay isn't about that. :p phfffbbt

Originally, this was to be part of Big-O Notation, but I suspect several people reading this will already be quite familiar with Big-O notation and won't need the primer, so I've split this off.

Order-of-magnitude analysis can be used in two different ways: the abstract, computer science-y way I described in Big-O Notation, and the practical way I alluded to in Indirect Thinking, where it's used to arrive at an actual concrete value. This essay addresses the latter.

For reasons explained in previous essays, it is often useful to gauge the order of a number. In its simplest form, this can be done directly on a number, such as 1040. What is the order of 1040? If you think back to algebra, you'll remember that the order of a polynomial is simply the highest power of the independent variable. For instance, the order of

f(x) = x^3 + 2*x^2 - 4

is 3 because that's the highest exponent of x, found in the x^3 term. You might also remember that regular, everyday, run-of-the-mill numbers such as 1040 have a "polynomial expanded form in their base." Put simply, that means we can write 1040, a base-10 number, as a polynomial (note that the "independent variable" in this expansion is the base--10--and the coefficients of the terms form the digits of the number itself):

1040 = 1*10^3 + 0*10^2 + 4*10^1 + 0*10^0,

or more simply, we can write the expansion as a function of its base b=10 and we drop the zero terms:

f(b) = 1040 = b^3 + 4*b

So the order of 1040 is 3.
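
For the curious, here's a tiny Python sketch of that idea--it computes the base-10 order of a positive integer two equivalent ways, by counting digits and by taking the floor of the base-10 logarithm. (The function names are mine, purely for illustration.)

import math

def order_base10(n):
    """Base-10 order of a positive integer: the highest power of 10 in its expansion."""
    return len(str(n)) - 1                # digit count minus one

def order_base10_log(n):
    return math.floor(math.log10(n))      # equivalent for positive integers

print(order_base10(1040), order_base10_log(1040))   # both print 3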

This information about 1040, or any other number, is useful mainly for the purpose of comparison. If we find ourselves in a situation where we must compare two numbers of vastly different scales, order-of-magnitude comparison brings such comparisons down into the realm of numbers we can grasp directly. For instance, we might wonder: approximately how many Earths could fit inside the Sun? Is it a thousand? Maybe ten thousand? (That seems like too many, doesn't it?) This is a question that has undoubtedly been answered before, but looking up someone else's result would probably take more research than we're willing to do to answer such a frivolous question just for the sake of interest. On the other hand, it's very easy to find the radii of the Sun and Earth, and with a little order-of-magnitude calculatin', we can quickly find the answer for ourselves.

So, I type sun into the Wikipedia and find out that the radius of the Sun is about 110 times that of the Earth [source]. (Note that I need not concern myself with the long, messy number that represents the actual radius...another nice thing that often occurs when doing order-of-magnitude calculations.) I happen to know that the volume of a sphere is 4/3 times pi times the cube of its radius (if you didn't know this, this information is also readily available). So if we let rE be the radius of the Earth, we can quickly figure the volume of the Earth, vE, compared to the volume of a sphere with a radius 110 times that size (the volume of the Sun, vS):

vE = 4/3*pi*rE^3
vS = 4/3*pi*(110*rE)^3 = 4/3*pi*110^3*rE^3

Since we're only concerned with order of magnitude here, and since we're about to take a ratio, I can simply drop the constant factor 4/3*pi that appears in both expressions. I can even turn the 110 into 100, the justification being that 100 is very close and happens to fall exactly on an order-of-magnitude boundary for base-10:

vE = rE^3
vS = 100^3*rE^3

If we divide these two quantities, we get our answer:

vS/vE = 100^3*rE^3/rE^3 = 100^3 = (10^2)^3 = 10^6

In other words, the volume of the Sun is about 6 orders of magnitude larger than the volume of the Earth--we should be able to fit about one million Earth-sized planets inside the Sun's volume. (If you do the actual calculations, you'll see that the real answer is roughly 1,331,000--this assumes that each Earth-sized planet is ground up and dumped into a bin the volume of the Sun. It would be less if we were to consider packing hard, Earth-sized spheres into a Sun-shaped and Sun-sized volume, because spheres do not pack the space as tightly as Earths that made their way through a cosmic coffee grinder.) Wow--how reasonable it seemed before to say that 10,000 Earths probably couldn't fit into the Sun, hm? As it turns out, that many Earths would only fill up about 1% of the Sun's volume, and it wasn't difficult at all, using order-of-magnitude reasoning, to see how far off we'd been.
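
If you want to check the estimate yourself, here's a minimal Python sketch of the calculation above, using the same roughly-110x radius figure. The 4/3*pi and rE^3 factors cancel when we take the ratio, so they never appear.

# Exact ratio of volumes vs. the order-of-magnitude estimate.
ratio_radius = 110

exact = ratio_radius ** 3       # 1,331,000 -- roughly 1.3 million ground-up Earths
estimate = 100 ** 3             # 1,000,000 -- the 10^6 order-of-magnitude answer

print(exact, estimate)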

So, that was an interesting diversion, but now we get to the real point. We accept that using order-of-magnitude reasoning brings us a powerful new tool--it allows us to compare numbers of vastly disparate sizes. This works because the numbers involved in order-of-magnitude calculations are powers of 10, which are small and manageable. The difference between a thousand and a million becomes simply 3. Even if we consider the vast theoretical extremes of our universe, the numbers remain very manageable. The "fundamental unit" of distance (roughly speaking, the smallest distance that can theoretically occur) is about 10^-35 meters. The known universe has expanded since the Big Bang to a diameter of roughly 100 billion light years, or about 10^27 meters. That means our universe contains, when speaking about matters of physical distance, only about 60 (give or take) orders of magnitude total. Only 60.

Think about that for a moment. This means that, no matter what size an object we happen to be discussing, that object's relationship with the smallest theoretically measurable distance can be represented using a number between 0 and 60. Or, alternatively, we can compare the size of any object at all to the size of the known universe using a number between 0 and 60. This brings such mind-boggling differences into the realm of what our minds can manage.

So much so, in fact, that it led me to think that our minds would probably have no trouble at all dealing with numbers that are even slightly larger than 0 to 60. The 60 orders of magnitude present in our universe hold only if we assume we're working in base-10. But why work in base-10? Clearly, if we used a smaller base than 10, we might gain some fine structure without costing us any manageability. What if we went to the smallest integral base available, base-2, instead? Then what would we find? Would the orders get so large that we would find ourselves outside the realm of the manageable? Well, let's see.

10 is about 2^3.33. This is not exact, but it's convenient to think of it as a nice round fraction, so I'll use 2^(10/3). If I remember my basic numbers correctly, then, we can convert a power of ten to a power of two simply by multiplying its exponent by 10/3. Let's see if I'm right. We'll figure out, for various powers of ten, what the corresponding powers of two are:

10 = 2^(10/3)
10^2 = (2^(10/3))^2 = 2^(20/3)
10^6 = 2^(6*10/3) = 2^20

These equations aren't exactly true, but they're all pretty close (10^6=1,000,000 and 2^20=1,048,576, not exactly the same, but close enough for our purposes). That means our base-10 range of 0 to 60 expands by a factor of 10/3, to roughly 0 to 200. This is definitely still manageable! And, it provides a much finer scale for us to work with. For instance, in base-10, the following numbers are all of the second order of magnitude: 110, 130, 480, 780. All of these numbers fall into different base-2 orders, though...respectively: 6, 7, 8, and 9. Even with this finer level of granularity, we're guaranteed that at the extremes the numbers will still always be restricted to the realm of the manageable.
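
Here's a quick Python sketch (names invented for illustration) that reproduces those orders by taking the floor of the base-10 and base-2 logarithms:

import math

for n in (110, 130, 480, 780):
    order10 = math.floor(math.log10(n))   # 2 for every one of these numbers
    order2 = math.floor(math.log2(n))     # 6, 7, 8, 9 respectively
    print(n, order10, order2)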

You might agree that there are benefits to comparing orders in base-2, but argue that it's not practical because people have no reference point for working in this base, whereas base-10 comes naturally to most people...you just count the digits. I would reply, though, that with the tiniest bit of effort, anyone can think in base-2. Because of computers, everyone already knows several benchmarks along the progression of the base-2 system. No one buys 100MB or 200MB of RAM; we buy 128MB (2^7 MB), 256MB (2^8 MB), or 1GB (which is 2^0 gigabytes, 2^10 megabytes, or 2^30 bytes). Furthermore, the two bases tend to align remarkably well every three orders; 1000 is roughly 2^10, 1 million is about 2^20, 1 billion is about 2^30, and so on.

It's not as far-fetched or abstruse as one might think, and it makes order-of-magnitude comparison that much more useful.

Journal: Big-O Notation

In Indirect Thinking, I used an order-of-magnitude check as an example of an indirect way to verify a calculation. Is 24*29 equal to 14,984,334? Most people won't have to do the multiplication to know that the suggested result is way too large...they have performed an order-of-magnitude check to make this determination.

In computer science, we are taught to think in terms of order-of-magnitude when judging the performance of algorithms. The result of that analysis is often recorded using what is called Big-O notation, in which the total amount of time the algorithm takes to run is related to some independent variable of the problem. For instance, you have a list of names (a telephone book, for example) and you want to know whether the name Gummercinda Lipschitz is in that list. How long does it take your algorithm?

Well, there are a number of algorithms that can be applied here. The simplest and most straightforward is to simply go through the list from beginning to end, comparing each name as you go. The variable in this problem is the number of elements that happen to be in the list--note that this value is independent of the algorithm itself...the algorithm will execute over any sized list. Of course, the longer the list, the longer it takes.

When doing an order-of-magnitude analysis, we say the time T for this algorithm to run will be directly proportional to the number of elements n in the list: T=a*n, where a is some scaling factor. The actual value of this scaling factor depends on all sorts of things: what kind of computer is running the algorithm, how many other processes are running simultaneously, how much memory is available to the algorithm, etc, etc. This is a very convenient way of mathematically representing our analysis because it binds together all of these unknowns into this one little variable a so we can focus simply on the interaction between the number of elements and the performance of the algorithm. Certainly, all of that other stuff is important in terms of actually running the algorithm and getting an actual time...still, there is value in understanding this particular fundamental relationship contained in the other two variables.

So, using Big-O notation, instead of the above formula we obscure the proportionality constant altogether...this focuses the mind on the relationship that's under study: O(n)=n, we would write. For the sake of comparison, let's consider an algorithm that tells us whether the first character of an input string is a capital letter or not (it simply returns true or false). If n, in this case, is the number of characters in the input string, we can clearly see that the algorithm need only concern itself with considering the first character alone. Hence, no matter how long the input string, the algorithm will run in the same amount of time, as it will simply ignore whatever comes after the first character. Represented in Big-O notation, this would be: O(n)=1. Expanded into standard mathematical notation, we would write T=a*1, or simply T=a...all of these represent the same relationship. This algorithm is said to run in "constant time" with respect to the length of the input string because, for a particular machine with some fixed amount of memory, other processes running, etc, it will always return a result in the same constant amount of time regardless of the length of that input string.
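
To make the two cases concrete, here are minimal Python sketches of the algorithms just described--a linear scan through the list, and a check that only ever looks at the first character. (These are illustrative toys, not anyone's production code.)

def name_in_list(names, target):
    """Linear scan: the running time grows directly with the number of elements, O(n)."""
    for name in names:
        if name == target:
            return True
    return False

def starts_with_capital(s):
    """Looks only at the first character, so the running time is constant, O(1)."""
    return len(s) > 0 and s[0].isupper()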

In actuality, there is one more notational contraction applied in Big-O notation...the formula on the right hand side is usually represented directly in the O() function instead of explicitly listing the independent variable there--it is understood by convention that n is the independent variable. So O(n)=n and O(n)=1 become simply O(n) and O(1), respectively. It is also not always necessary to roll every scaling factor into the unseen proportionality constant. For instance, if my name search algorithm was designed only to tell me if the name was present in the first third of the list, it would not be wrong to represent this algorithm's "order" by writing O(n/3). This 1/3 factor represents a fundamental part of the problem under consideration because it's related directly to the interaction between the performance of the algorithm and the independent variable, so it's not wrong to include it. On the other hand, for most purposes that Big-O analyses are done, this factor would be of little consequence, so it would usually be excluded anyway.

In the previous example of the name list, we might be able to provide a different algorithm based on additional knowledge we have. For example, let's say the list of names accessible to our algorithm has certain features; it is always sorted alphabetically and we can always find out before we begin the search how many total elements it contains. In this case, we can use the divide-and-conquer technique. We skip to the middle of the list and see if Gummercinda's name is in the first or last half, then we recursively repeat this process on the half that might contain the name until we get to the position it would alphabetically occupy. This way of doing things does not vary directly with the number of elements in the list; rather, it varies directly with the logarithm of that number. (This is because each time I double the size of the list, only one more iteration is required for my algorithm.) This equates to O(log n).
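
Here's a minimal Python sketch of that divide-and-conquer search, assuming the list is already sorted alphabetically:

def name_in_sorted_list(names, target):
    """Binary search on a sorted list: each comparison halves the search space, O(log n)."""
    lo, hi = 0, len(names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if names[mid] == target:
            return True
        elif names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False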

So, what is the point of all this? Big-O is actually a very good way of distilling order-of-magnitude calculations into a reductionist form...everything but that which is absolutely necessary is rolled into an unseen constant. So, the basic character of two algorithms can be compared simply by comparing the graphs of their Big-O expressions. Even more complex information can be teased from these...for instance, if I have two algorithms that solve a particular problem, one associated with O(n^10) and the other O(2^n), I can tell right away that the former will, for large n, be more performant. But I might be working with a situation where n will never exceed a hundred, and I might wish to know for what n the second algorithm will begin to run slower than the first. With a bit of calculating, it's easy to see that the second overtakes the first in terms of running time around n=60.
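
If you'd rather not trust my arithmetic, a few lines of Python find the crossover point:

# Find the smallest n for which 2^n exceeds n^10.
n = 2
while 2 ** n <= n ** 10:
    n += 1
print(n)    # prints 59 -- consistent with the "around n=60" figure above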

This points up that any one of the quick calculating methods introduced in the previous essay Indirect Thinking can be developed further. It's true that the more developed version may not be as quick to check off-the-cuff, but it can yield a perspective and information about the problem that might otherwise lie undiscovered altogether.

Journal: A New Kind of Program, Part II

Before reading this, you might want to read Part I of what has apparently become a series.

This new approach to developing applications brings unprecedented customizability. So much so, in fact, that it's hard to get a handle on all of it, and it even calls into question whether two different installations of the same application can really be called the "same" application beyond some point. I've been thinking on this since I posted that last thought, and I think I may have something.

What if applications were multi-user like operating systems are multi-user? In other words, what if you had to log in to an application before you could use it? This may sound like a horrible inconvenience, but stick with me...you'll see where I'm going with this shortly.

I download a word processor, the core of which is actually just a framework for word processing plug-ins. This framework sports a standard plug-in that connects up with a web service, hosted by the application developer, and logs in using my account on that web site (yes, I can optionally register an account with the site to download the word processor). Once this app is installed, I browse the list of available plug-ins and customize to my heart's content. Each time I install a plug-in, the web service module updates my on-line account to reflect the current customization, including all plug-ins installed and configuration information.

I go to my friend's house and get on his computer. He has installed the same word processor and customized it to his liking. But when I start up the app, it prompts me for a login and password, which I provide. It downloads all of my plug-ins and configuration. Some of these are necessary for me to start using the application for the first time on my buddy's computer, so for those I have to wait. Once it's done installing those and updating the configurations, though, it lets me start using the application. Meanwhile, in the background, it keeps downloading and installing the rest of my plug-ins, and my configured functionality starts magically appearing as I use the application.
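
Just to make the idea concrete, here's a rough Python sketch of what that sync step might look like. Everything in it--the URL scheme, the JSON layout, the helper function--is invented purely for illustration; no such service actually exists as far as I know.

import json
import urllib.request

def download_and_install(plugin):
    # Stand-in for the real installation step (hypothetical).
    print("installing", plugin["name"], plugin["version"])

def sync_plugins(account_url, username, installed_names):
    """Fetch the plug-in list stored for this account and install anything missing locally.
    The URL scheme and JSON layout here are made up for the sake of the sketch."""
    with urllib.request.urlopen(f"{account_url}/users/{username}/plugins") as resp:
        wanted = json.load(resp)    # e.g. [{"name": "spellcheck", "version": "1.2"}, ...]
    for plugin in wanted:
        if plugin["name"] not in installed_names:
            download_and_install(plugin)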

Ah, but every problem solved spawns a new set of problems. You're thinking, Wait a minute...I already have enough trouble keeping track of all the accounts I already have to maintain for sites like /., my online bank access, the Wall Street Journal...now I have to create logins and passwords for each and every application I use? I feel your pain. As someone who tries to go by the handle sever everywhere I go, I've recently been stung several times by the requirement at some sites that handles be at least 6 characters long. Also, for some reason, some sites will not accept special characters in passwords such as |, &, or $. So this means that I have to have at least 2 logins (the login I naively used to initially create accounts, and the one I had to invent with more than 5 characters) and two passwords (one secure one with lots of special characters, one less secure one with none), making a total of four combinations. Oh, and let's not forget the login I use for accounts that are jointly accessible to both me and my fiancee (that login has 6 chars, but I still need two passwords...argh). I can tell you, I usually have to log in using the guess'n'check method if I haven't used an account for a little while.

Technology to the rescue! The last few times I've installed Linux, I've noticed that most distributions now come with an application called keyring. It's a fairly simple idea--it's a little database that associates all of your username and password combinations with the appropriate site. It even performs the login for you automatically, I believe, when it senses you're being prompted for a login (cookies be damned--this is much better). Of course, it keeps all of this information securely, encrypting every bit of data that passes through it. I'll bet it even prefers HTTPS connections or uses a web service to perform the login if they're available.

What if we were to marry this keyring application with the above idea of an application login model? It works like this...you go to the website of your favorite keyring application and create an account. You download the keyring application and install it, and from then on, whenever you add an account to it, it updates the information remotely for you. Voila! Now, even at your friend's house, you have access to your keyring (assuming he has that keyring application installed). You only have to remember that one username and password to get access to everything on the Internet associated with you. Suddenly, having to log in to every application doesn't seem like such a burden--it's done automatically for you.
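
As a purely hypothetical sketch (none of these names correspond to any real keyring program's API), the core data structure could be as simple as this; the real work would be in encrypting the entries and syncing them to the remote account:

from dataclasses import dataclass

@dataclass
class KeyringEntry:
    site: str         # URL or application identifier
    username: str
    password: str     # a real keyring would store this encrypted, not in the clear

class Keyring:
    """Toy in-memory keyring; a real one would encrypt entries and sync them remotely."""
    def __init__(self):
        self._entries = {}

    def add(self, entry: KeyringEntry):
        self._entries[entry.site] = entry

    def credentials_for(self, site):
        entry = self._entries.get(site)
        return (entry.username, entry.password) if entry else None

ring = Keyring()
ring.add(KeyringEntry("https://slashdot.org", "sever", "not-my-real-password"))
print(ring.credentials_for("https://slashdot.org"))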

Here are several more possibilities, some food for thought. (1) Could a standard be developed for such keyring applications? This way, you'd be free to install whatever keyring app you like and they'd all share your encrypted information, meaning that when your friend visits you or you visit your friend, regardless of what keyring app he uses, you can still log in by providing the username/password and the URL for the database of your preferred keyring app. (2) Could such an idea be incorporated into OS logins? This way, you wouldn't log in to your friend's computer directly either--the login prompt would provide an option whereby you could log in using your keyring account, whereupon it would look up the login credentials you chose for your friend's computer, just like any other website. (3) Could such a keyring application automatically create accounts for you? Let's say you find a new discussion board on the web that requires you to create an account before you can start posting. It would be nice if you could simply go to your keyring app, type in the URL of the site asking you to create an account, and it would do it automatically. It could even use a nonsense handle and password--who cares? You'd never need to know it anyway to access the site...let the keyring app do the work whenever you need to log in!

So, let's run through a quick example of how this might work. You download and install your favorite keyring application. You decide to install some applications: Firefox, a word processor, and a file server. As you download each one, your keyring app automatically creates an account, encrypts, and stores your credentials. You configure each app (including the keyring app) with the plug-ins you prefer and customize them according to your wishes. You create an account for yourself in your file server so you can access your own machine from remote locations, and you add the credentials and URL to your keyring app. You remember to add your login credentials for your own computer because your OS integrates access to your keyring--so now, you log in even to your own home computer using the keyring option. You also add your login credentials for the home computers of your friends.

Then you head over to your friend's house. You sit down at his machine and log in using your single keyring username/password and the URL of that keyring application's database. It fetches your login information for your friend's computer and logs you in. You open the word processor and your keyring provides your credentials, at which point it downloads and configures itself to your liking. You decide to save your document on your home computer. You map a network drive (I know, I know, Windows-speak) to your home machine, at which time your keyring logs you into your file server at home. You save the document to that network drive.

There is no reason this idea can't be applied across the board, to everything from logging in to the OS to using a file browser, web browser, command shell...whatever. The OS could even configure itself with applications. For instance, if you use Firefox at home, then as soon as you log in to your friend's computer it downloads and installs Firefox for you. This idea could even be applied to licensed software...say you have your very own license for IntelliJ IDEA. As long as the keyring has all of your login credentials, there's no reason you shouldn't be able to install that and run it from your friend's machine as well.

So what are the flaws with this idea? I don't see any that are insurmountable, though the most important I've come up with are still worth mentioning.

This could use up a lot of space. Let's say you've created accounts for your 20 closest friends, and each one has their own set of plug-ins for Firefox, a word processor, a calculator program, etc. That's a lot of plug-ins that are floating around on your machine, only a subset of which get used at any one time. If one of these friends only visits you once a year, do you really want that person's 100MB of Firefox plug-ins taking up space? I would address this one by saying one of two things: (1) hard drives are cheap and getting cheaper, so yes, you could simply spare the room and (2) an application-independent plug-in manager could be invented that tracks all the plug-ins on your system and removes rarely used ones...if needed, they'll simply get downloaded again at the appropriate time. An app-independent plug-in manager is also cool because it could automatically download updated versions of plug-ins as they are released. Your applications are all in continuous upgrade mode all the time without you doing anything.
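
Here's a sketch of what that cleanup pass might look like, with the cutoff and the bookkeeping entirely made up for illustration:

import time

def prune_stale_plugins(plugins, max_idle_days=365):
    """Hypothetical cleanup pass: keep only plug-ins used within the last max_idle_days.
    `plugins` maps each plug-in name to the Unix timestamp of its last use."""
    cutoff = time.time() - max_idle_days * 24 * 60 * 60
    return {name: last_used for name, last_used in plugins.items() if last_used >= cutoff}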

What about speed? Wouldn't downloading all these things add up to a lot of bandwidth? Yes, it would...but only the first time you logged in to a particular machine. After that, assuming the plug-ins don't get deleted, they're there waiting for you. Besides, bandwidth is soon going to be a lot cheaper, with home connections now getting pushed up into the 3Mbps range. Soon, we'll fly past that.

What about security? If I were a savvy programmer, I could create a Firefox plug-in, for example, that does malicious things or gives me backdoor access to the file system of the machine on which it resides. Normally, such a plug-in would be found out because there'd be lots of eyes on it, but let's say I don't put it out for general use...it's only available to me. So when I log in to my friend's machine, this plug-in gets installed by the keyring app and boom, now I've got back-door access to his machine any time I want it. This one I'm not sure how to solve, so discussion is welcome.

The more I think about it, the more I believe it: as applications evolve, we'll need applications that know how to reconfigure themselves for whoever is using them, and this application-level login model might be just the thing.

Journal: Indirect Thinking

The unexamined life is not worth living. --Socrates, Apology, 38

I found myself recently wondering if a question exists that can be answered without directly addressing the question, but instead by considering the nature of the question itself. To give an example of what I mean by "directly addressing" a question, consider the following scenario.

I ask you: is 56*19 = 1040? To directly approach the answer to this question, you could do a number of things. You could calculate 56*19 and compare the result to 1040 to see if they are the same. You could do something algebraically equivalent as well, such as dividing 1040 by 56 to see if the result is 19. My use of the word direct in describing this approach has to do with the fact that these approaches depend upon executing exactly the operations (or the algebraic equivalent) present in the problem statement.

This is how we humans are taught to solve problems (at least, how this human was taught). This approach solves the problem, but not in a way that relies on creativity or intuition...it instead requires the solver to apply elementary knowledge in a simple and straightforward manner to find the answer. The solution is found through an application of knowledge and could more or less be carried out by someone that isn't necessarily intelligent but, rather, has memorized a sequence of steps without any real insight into the problem. Even so, using this approach tends to give people a feeling of satisfaction because it answers more than what the question explicitly asks (which could be addressed with a simple yes or no) and goes a step further in answering the unasked question: What is 56*19?

Humans have the habit of assuming the presence of these implicit questions. If you ask this question of your friends and relatives, I'm fairly confident that a good number of them will not simply respond with a simple yes or no, exactly enough to answer the question. Instead, they're likely to reply with a number.

So we've covered how to answer this question using a direct approach. What would be an indirect approach? Well, here's one example. I might notice that, whenever one multiplies two numbers together, the last digit of the product is always the same as the last digit of the product of the two factors' last digits. That was a mouthful, so if you don't feel like parsing it, fear not and read on. The rightmost digits of 56 and 19 are 6 and 9, respectively. The product of these two digits is 54, the rightmost digit of which is 4. So without actually multiplying 56 and 19 together, I can say with certainty that the final digit of the product will be 4. In the problem statement, the suggested product is 1040, which does not have a final digit of 4, and therefore cannot be the correct product.
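
In code, the whole check is one line of arithmetic. Here's a Python sketch (my naming, purely illustrative):

def last_digit_rules_out(a, b, claimed_product):
    """True when the last-digit check proves a*b cannot equal claimed_product.
    False means the check is inconclusive--it can never confirm a product."""
    expected_last = (a % 10) * (b % 10) % 10    # last digit of the true product
    return claimed_product % 10 != expected_last

print(last_digit_rules_out(56, 19, 1040))   # True: 6*9 ends in 4, but 1040 ends in 0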

This approach, while a bit more convoluted, is a bit more appealing to me because it seeks to provide an easy means of answering not just this particular question, but an entire class of similar questions. Furthermore, it requires a minimal amount of information from the problem statement while still providing the answer.

Consider, for instance, a slightly different problem. I ask you: is 734...211 (a 1000-digit number, of which I am only telling you the first and last three digits) times 349...214 (another 1000-digit number) equal to 936...001 (a 2000-digit number)? Using the direct approach of multiplying and comparing your result, you might conclude that this question is unanswerable because you can't carry out the multiplication; you don't even know what the actual numbers are.

Using the indirect approach, however, one can immediately see that not only is the answer to the question obtainable, but easily so! Even grade schoolers can do the required calculation in their heads. On the other hand, using the direct approach, even if I acquiesced and provided you all of the digits of the three numbers, finding the answer would require quite a long bit of calculating.

There is one little hang-up with the indirect approach, though. Perhaps you noticed it already...the class of problems that this approach applies to must be carefully considered. For instance, one might think this approach can be used to quickly answer any such question, such as: Is 21*21 = 421? You might be tempted, based upon the indirect approach, to say yes, it appears so. But do the calculation, and you'll discover that it is not so; it's merely that the actual product (441) happens to have the same final digit as the suggested, but incorrect, product. So this indirect approach is only useful in answering this question when it contradicts the suggested product, but outside the class of problems that fall into this category, this approach has little to contribute.

Note I say it has little to contribute, but not nothing. If we assume that incorrect suggested products sport a last digit that is evenly distributed over all ten of the possible values, then the suggested approach will answer the question 9 times out of 10 (whenever the final digit of the suggested product does not match the actual product's final digit). That means that when the check fails to rule out the proposed product, while we cannot say the product is correct, we can say we have a slightly higher degree of confidence that it is correct than before we applied the method. This alone is negligible, but consider if we had a long list of different methods such as this one to apply to the problem. If all of them failed to show the suggested product is incorrect, and there were enough of them, we might be able to have a very high degree of confidence that the proposed product is correct. It is still not a sure thing, depending on the completeness of the methods we've assembled in combination, but it's better than nothing and in some cases still far less work than the direct approach.

So we might conclude from this that using several indirect approaches in concert can be a very effective way to deal with a seemingly intractable problem. For the sake of example, let's consider a second indirect approach to the same kind of problem: is 22*47 = 1,048,224 true? Can a 2-digit number multiplied with a 2-digit number result in a 7-digit product? If you study this a bit, you can conclude that any m-digit number multiplied by any n-digit number will result in a product no larger than an (m+n)-digit number. (This is also known as checking by "order-of-magnitude.") Since the proposed product above has more than 4 digits, it cannot be correct. So here is a case where the first indirect method could not be applied, but the second one works. These two approaches, taken together, effectively expand the problem space that we can answer conclusively. If the problem at hand involved two 1000-digit numbers, one could develop quite a long checklist of these techniques and still find that the total amount of work involved in performing each and every check is far less than the work of the direct approach.
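
Here's a Python sketch of this second check; like the first one, it can only ever rule a proposed product out, never confirm it:

def digit_count_rules_out(a, b, claimed_product):
    """True when the claimed product has more digits than an m-digit number
    times an n-digit number can possibly produce (at most m+n digits)."""
    max_digits = len(str(a)) + len(str(b))
    return len(str(claimed_product)) > max_digits

print(digit_count_rules_out(22, 47, 1048224))   # True: 7 digits claimed, at most 4 possible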

This is all well and good for mathematics, but can this kind of indirect reasoning be applied to knotty "real life" questions, such as those that might arise in a discipline such as philosophy? I'm glad you asked, because this is exactly the thought I found myself considering a few days ago. And, I think I've found an example question that is related to the quote at the top of this essay.

The philosophical question is: is it useful to examine one's own beliefs? To directly answer this question, one might choose a particular position ("yes, examination of one's own beliefs is useful") and then set out to support this position. Then, one might choose the opposite position ("no, self-examination is futile and a waste of valuable time") and set about supporting that viewpoint. Once one is satisfied that both positions are bolstered to the best of one's abilities, one could compare the two arguments and see which one is more complete, more self-consistent, etc, and ultimately buy into one or the other. (It is worth pointing out, however, that addressing such questions in this way has been known to occupy great minds for years and even lifetimes without being resolved to the person's satisfaction.)

Now, let us consider the indirect approach to such a question. For the sake of this argument, let's assume that there exists support in favor of not examining one's beliefs, and this support trumps that of the opposing position. (That is not to say such support actually exists--only that we are assuming it as a basis to move forward along the indirect path.) If such an argument exists against self-examination and we were to discover it, we would without a doubt consider this newfound knowledge useful--why else would we put in the time and effort to uncover such a conclusion if the result of that work was not interesting to us?

But, if we were to achieve this state of affairs, we would immediately find ourselves in a quandary. The argument we have discovered is itself an example of the self-examination of one's beliefs yielding a result we find useful! It seems that the mere act of finding a good argument for this position disproves such an argument.

Now, let's consider our original assumption, which is: the argument against self-examination is better than the argument for it. All of the reasoning above is based upon this assumption, but what if this assumption is incorrect? If this assumption is wrong, then it means the opposite is true--that the argument for self-examination is better. So, even if our original assumption is invalid, we arrive at the same conclusion. Any way we cut it, the only logical stance is that self-examination of one's beliefs is useful, which is the answer to our question. We were able to arrive at this conclusion without even attempting to answer the question directly...instead, simply by considering the nature of the question itself, we were able to arrive at the correct answer.

Such methods of indirect reasoning are the hallmark of great thinking in many fields. Mathematics even sports formal methods of reasoning based upon indirect thinking...to name a couple: reductio ad absurdum and mathematical induction. Both have indirect thinking at their cores. ("Reductio ad absurdum" literally means "reduction to absurdity"...it is a method of answering a mathematical question by taking a position and showing that it irrevocably leads to impossibilities. Mathematical induction proves that a thing is true in general by proving that, if it is true in one case, then it must also be true in the next case; all that remains once this is done is to prove a single base case.)

What I have termed indirect thinking in this essay is, I think, an essential attribute of all truly deep understanding. It is an example of the synergy that can arise in the mind when that mind not only knows how to find an answer through brute force means, but also intimately understands the nature of the question and the essence of what allows the brute force method to work.

Journal: Should Paul Hamm Give Back the Gold?

I've got Olympic fever this year for some reason. I've never really been into the Olympics that much, or watching sports in general, for that matter. This year, however, is the first year that the Olympics have occurred while I was in possession of a TiVo. This amazing device allows me to watch about 8 hours of Olympic coverage in about 100 minutes (I timed it). And yes, this includes the events, the post-event interviews where reporters require long-winded responses to questions while an athlete that's just completed a 5000m race struggles to catch their breath, as well as all of the backstories--which, to NBC's credit, have been dramatically and admirably scaled back this year in favor of broadcasting, um, the Olympics.

Being interested in the Olympics this year makes me "de facto" interested in the gymnastic, aquatic, and track and field events, as these three groupings make up probably more than 75% of the Olympic broadcasts in America. (By the way, what do I have to do to see complete coverage of Olympic ping pong? Move to China?)

Having said that, this controversy over American Paul Hamm's gymnastics all-around gold just won't go away.

To give some background on the issue: in the men's gymnastics all-around, South Korean Yang Tae Young's parallel bars routine was incorrectly given a starting value of 9.9 when it apparently deserved a starting value of 10.0, unfairly reducing his score in that rotation of the event by 0.1 of a point. After the final event of the all-around competition was over and the scores were tallied, that 0.1 from the parallel bars routine was enough to knock him from gold medal position to bronze.

But this is not the whole story, and everyone from the International Gymnastics Federation (FIG) to the U.S. Olympic Committee (USOC), though at loggerheads over how the issue should be handled, seems to be ignoring the fact that Mr. Hamm does indeed deserve the gold medal.

This is true for a number of rule and policy reasons binding on the competition:

  1. If the South Koreans wanted to dispute the judges' scoring of the parallel bars, they had to file a complaint before the end of that rotation within the all-around event. They did not.

    ...FIG rules state that any protest must be filed before the end of a rotation -- in this case, the parallel bars -- which the Koreans failed to do. [source]

  2. The judges only discovered their error after review of the videotape, which is not allowed in judging Olympic gymnastics events:

    "The people I'm a little bit upset with is the FIG because this matter should have never even come up," Hamm said. "Reviewing videotape isn't even allowed in the rules. Rules can't be changed after the competition is over. Right now, I personally feel I shouldn't even be dealing with this." [source]

  3. The rules also state that Yang Tae Young does not meet the conditions necessary to lodge a request for a second gold medal:

    Bruno Grandi, president of the International Gymnastics Federation, told The Associated Press on Monday night that rules prevent him from asking for another gold medal to make up for the scoring error that cost Yang Tae-young the all-around title.

    "I don't have the possibility to change it," Grandi told the AP. "Our rules don't allow it." [source]

    On top of this, International Olympic Committee (IOC) president Jacques Rogge told Bob Costas that current IOC policy, since the judging scandal at the Salt Lake City 2002 Winter Olympics, is that medal standings cannot be changed after the medals have been awarded unless the judging is found to be tainted or doping has occurred. Specifically, duplicate medals will only be given in the former case of corrupt judging.

Ok, so all of the rules and regulations of the sport seem to be falling down on the side of Paul Hamm, which means he officially gets to keep his gold medal without question. But we all know he's not the better athlete, and if the judging was flawless the gold would really have gone to Yang Tae Young, right?

Wrong. And I quote:

So why not a second gold medal [for Yang Tae Young]? Why not accommodate the upset Koreans and send everyone home happy? Well for one thing, you can make a pretty good case that, if you're going to go to the videotape, Yang shouldn't have won.

Yes, the videotape of the parallel bars showed the judges erred by assigning a 9.9 start value. But it showed something else, too. In the course of his routine, Yang had four holds on the bar, when the rules allow for a maximum of three. The deduction for that mistake? Two-tenths of a point.

The judges missed it.

It is not enough to say Paul Hamm should keep his gold medal. He's a deserving champion. Period. [source]

After knowing this, don't the FIG's and South Korea's calls for Hamm to give his gold to Yang Tae Young in the name of sportsmanship seem a bit sore-loser-ish? I say Yang Tae Young should file an official request to have his true scoring--which nets him one-tenth of a point less than he was awarded--recorded in the history books. This would once and for all remove all hint of impropriety from Hamm's accomplishment.

Now that would be a shining example of sportsmanship.

Journal: A New Kind of Program

Recently I've noticed a new trend in programs. It seems that, when I wasn't looking, programmers stopped writing applications that do anything useful for end users. Even more surprising, these useless programs have become very, very popular.

Does this sound crazy? Well, let me ask you this, then: have you heard of a web browser called Firefox? Have you heard of an Integrated Development Environment (IDE) called Eclipse? Both of these programs, by themselves, are utterly useless...the core of each of these applications is nothing more than a platform for plug-ins, and it is the plug-ins (also called "extensions") that do all of the heavy lifting.

If you are unfamiliar with the philosophy driving these two applications, and what I can only assume is an avalanche of applications that will soon follow this same technique, allow me to explain. A few years ago, an application was simply a collection of code that was assembled into a cohesive unit and deployed (or "installed") that way on a computer. Each part of that application was typically dependent on a large number of other parts of the same application, creating an intricate and tangled web of interdependencies between them. In essence, these "parts" were so interdependent that they really did not exist separately from each other, and even labelling them "parts" in the first place requires a suspension of disbelief. Truthfully, it was more appropriate to call the entire application as a whole a "part".

Because of this web of interdependency in the architecture of an application, all of the customizability of such apps had to be pre-planned from the very beginning. This is typically done at the beginning of the software development cycle, and it's called "collecting requirements". Software development teams literally sit down and draft a list of requirements the end product must meet in terms of its functionality. In a word processing application, for example, if the user is to be given a choice of whether a ruler is to appear in the main content pane of the editing window, the software that makes that ruler possible must be designed and developed right into the code. In a later version, if other code in the app depends upon that ruler and the ruler must be changed to exhibit new functionality, it's quite possible (and probable) that all of the dependent bits of code will have to be updated as well to leverage this new functionality.

I'm sure the first glimmers of change came with applications like Photoshop. For several years and several versions now, one of the main features well-known to all Photoshop users is its array of filters that can be used to do various things to an image. For instance, there's a filter to sharpen a blurry image. There's a filter to blur a sharp image. There are filters that add dust and speckles to images, filters that add noise, and even filters that make a photo look like a watercolor painting or a stained glass window. At some point, someone on the Photoshop development team realized that the potential for so many different filters existed that there's no way one company could ever implement them all. So, to make their product even more capable, they set about developing a feature in Photoshop not intended for use by end users editing images, but rather for other companies. This feature was fairly simple--it was a small bit of software that makes it possible for a bit of code to request the data making up whatever image is currently active in the application. This bit of code is then free to modify the data (the image) in any way it wants and then hand back the new image to Photoshop. (Of course, it's actually much more complicated than this, as these bits of code--third-party filters--can interact with Photoshop features such as selections, the color chooser, etc.)
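
To give a flavor of the idea (and only a flavor--the real Photoshop plug-in SDK is far more involved, and every name below is invented), the contract between host and filter boils down to "here's the image data, give me back new image data":

def invert_filter(pixels):
    """A toy third-party 'filter': takes the image data, returns a modified copy.
    Here an image is just a list of 0-255 grayscale values."""
    return [255 - p for p in pixels]

class ImageHost:
    """Stand-in for the host application: hands the active image to a filter
    and accepts the modified image back."""
    def __init__(self, pixels):
        self.active_image = pixels

    def run_filter(self, filter_func):
        self.active_image = filter_func(self.active_image)

host = ImageHost([0, 128, 255])
host.run_filter(invert_filter)
print(host.active_image)    # [255, 127, 0]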

Now other companies could write filters for Photoshop that could be purchased by users and installed, and they'd pop up in the menu with all the rest of the default filters that come with Photoshop. The nice thing about this, for these companies, is that whenever they want to release a new version of their filter, they don't need to worry about updating any dependent code in Photoshop; there's a well-defined separation between the main application and the work the filter does, so they're free to simply release the new version of the filter and the user installs it, replacing the older version and getting access to the new functionality.

I'm sure it didn't even take a full version cycle for the Photoshop development team to realize this as an important benefit they could leverage for their default filters as well. So even though these default filters are a standard part of Photoshop, they exist as plug-ins just as if a third-party company had developed them.

(I feel I must insert a quick aside here to die-hard emacs users. Yes, I know emacs came before Photoshop. Yes, I know emacs is a far better example than Photoshop. I'm trying to make this accessible to Windows users. Besides, I use vi. :p )

Fast forward to Firefox. This browser takes this idea to a whole new level--this app is essentially this concept on steroids. Upon installation of the browser, there is virtually no user-available part of the application that is hard-wired into the application itself...it's all extensions. The "application," to use the term as we would have a few years ago, is simply a framework for browser-type extensions...a framework that allows extensions to be installed, uninstalled, and swap information with each other. Remove all of the standard plug-ins that come as a default part of the application, and that's all you're left with--the application would probably not even be able to display a window on the screen because that functionality is the job of one of its standard extensions. I do not think the old definition of the term application will weather this change in philosophy--I think it's probably going to evolve to mean something like: "a framework for plug-ins related to a common theme and a standard set of those plug-ins that implement a set of basic requirements." In other words, in Firefox's case, the term application must include the standard set of default plug-ins that are supplied with the default application.

The benefit of this approach is apparent from a comparison with Photoshop filters. Virtually any part of Firefox can be replaced by removing an existing extension and supplying a new extension in its place that does everything the old one did and more. For instance, when I installed Firefox it sported a set of context-sensitive flyout menus that pop up when I clicked the right mouse button. I installed an extension that added small icons to these context-sensitive menus. I added another extension that monitors if I request a context-sensitive menu on a highlighted word--if I do, it provides an option for me to look up that sequence of letters in an on-line dictionary. (Which one? I can configure this extension with up to 4 on-line dictionaries of my choosing.) The other day I was a little frustrated with the way Firefox handled downloading files. Within 5 minutes, I found a list of extensions available for it that completely alter the way downloading files is handled, each replete with its own bevy of configuration options.

The great thing about this new approach is that it recognizes the unique needs of end users. It doesn't expect all users to agree on the way a particular thing should be handled. It doesn't even require users to share the same set of expectations of what the application *should* do. Think about the dictionary extension I added to my browser--some developer out there could just as easily write a thesaurus extension, an encyclopedia extension, a rhyming dictionary extension, even extensions specific to disciplines such as astronomy (look up this constellation) or organic chemistry (do a search for scientific papers on the highlighted compound). The possibilities are endless.

Of course, no great idea goes completely unpunished. One of the greatest drawbacks of this philosophy is: the possibilities are endless. Often referred to as "the tyranny of choice," it's very easy for users to quickly become buried beneath an avalanche of extensions that do everything from making your browser's status bar display the surf conditions at your area's nearest beach to adding a sidebar that constantly probes the port space of a subnet of your choosing. You see, the temptation is great for developers to implement extensions that are only tenuously connected to the basic purpose of the application. Ask yourself: is displaying the current surf conditions functionality that belongs in a web browser? The problem is, some developer out there thinks so...and that extension need not be installed in your particular browser to vex you. Such an extension need only be available to start causing you problems; when you go to search through the list of available extensions "just to see what's out there," you are immediately faced with the daunting task of reading a listing of extensions longer than the US tax code. In fact, Firefox, a web browser that has not yet reached a stable 1.0 release, already has quite a formidable list of available extensions that requires the better part of an afternoon to parse. I shudder to think how I'll know which extensions are useful to me by version 5.0.

This approach to programming isn't actually a totally new idea--it's been around for a long time in the computer world, and it's quite familiar to users of UNIX-based platforms (such as Linux). It is often said that the UNIX philosophy is to write a tool focused on "doing one thing well." Programmers who grew up on UNIX-based platforms tend to write lots and lots of small tools that each do a very small job, but allow the user to do that job in every conceivable way. Then, to carry out larger tasks, these small tools are chained together in creative ways (the chains are called "pipelines") to get the desired effect. This idea of extensible applications such as Firefox is the same pretty girl--she's just wearing a different dress.

So, this means we can learn a lot about the direction this new approach to designing applications will take by looking at Linux, the UNIX-based operating system for PCs. There are options for pretty much anything a user wants on the Linux platform. Let's say, for instance, that I want a program that allows me to read email in a text-only environment. I can choose from mail, elm, pine, mutt, zmail, emacs configured with e-mail client extensions...and the list goes on. How can I, as a user with a finite lifetime and limited patience, even begin to understand the possibilities of each and every kind of application if text-based email clients alone can consume several days' research? The best solution that the Linux community has come up with is: distributions. That is, a bunch of people get together and spend a whole lot of time figuring out what particular mix of tools and applications they prefer, and once they all agree they assemble a Linux installation complete with that set of applications. And this might solve our little problem--except there are lots of these people, and they can't all agree on just a couple of different distributions. You saw this one coming, I'll bet; now I'm faced with choosing a distro from the pool of available distros: Gentoo, Linspire, Mandrake, Fedora, SuSE, Debian, Slackware, Knoppix...and this is only a sampling of the most popular ones.

With this level of customizability come other problems as well. For example, if you go to a friend's house and he also has Firefox installed and customized to his liking, would you recognize it as the same browser you use at home? It depends on the degree of customization each of you uses. At what point are you actually using a completely different application? There is definitely something to be said for consistency...you may both be calling your preferred web browser Firefox, but if you share no common ground, are you both using the same web browser? Then there's the issue of security. Who's to say that among the thousands and thousands of extensions available for a particular application, there isn't some cracker out there who's generously contributed an extension that provides useful functionality to both you (your default page is automatically set to the cookie recipe of the day) and him (he gets back-door access to your machine, a log of the sites you visit, and any usernames/passwords entered at those sites)? How do you know a particular extension you might find useful can be trusted?

It's clear to me that the immediate advantages of this new approach outweigh my fears about future consequences. I do think these potential problems become somewhat less menacing if we make every attempt up front to anticipate them. As long as we do that, creativity and time are on our side.

In the meantime, I have to get cracking on finding a Firefox extension that will allow me to compare the specific gravities of known water table toxins.

User Journal

Journal Journal: What is an Operating System? 1

I was recently involved in a discussion right here on /. in which the various parties involved were at loggerheads because they did not share a common view of what exactly an "operating system" is. I thought I'd dash off a quick response and tell them the proper definition of the term so we could get on to the heart of the matter. Five minutes into rewriting my post, I realized I myself don't really know exactly where OSes end and applications begin these days.

So, I did what a tech geek does...I googled definition of operating system. This created more questions than answers, particularly this link right back here to /. (this site seems to be the Tigris and Euphrates of tech geek discussions). Uh oh, I thought, Looks like I'm gonna have to get theoretical on this problem.

So I defer to the man who literally wrote the book on operating systems, Andrew S. Tanenbaum, and his seminal work on the subject, appropriately titled Modern Operating Systems, which happens to be my college text. Below is an extremely abridged account of his definition.

Computer software can be roughly divided into two kinds: the system programs, which manage the operation of the computer itself, and the application programs, which solve problems for their users. The most fundamental of all the system programs is the operating system, which controls all the computer's resources and provides the base upon which the application programs can be written.

Next there is a figure that nicely sums up the whole of a computer system as six layers, the top two of which are divided into three parts each. Starting from the bottom, the layers are: physical devices, microprogramming, machine language, operating system, compilers/editors/command interpreter, and banking system/airline reservation system/adventure game. The bottom three layers are designated "hardware" (with the proviso that machine language is really a specification the hardware implements rather than hardware itself, though manufacturers typically describe it in their manuals as if it were), the next two are "system programs", and the top layer is "application programs". It is interesting to note that though Tanenbaum gives the command interpreter, or shell, the same classification of "system program" as the OS, he considers it distinct from the OS itself. In other words, a shell is not part of the operating system, which directly contradicts the view of many of the participants in the above-linked /. discussion.

He goes on to say:

Most computer users have had some experience with an operating system, but it is difficult to pin down precisely what an operating system is. Part of the problem is that operating systems perform two basically unrelated functions, and depending on who is doing the talking, you hear mostly about one function or the other. Let us now look at both.

This is followed by two sections entitled The Operating System as an Extended Machine and The Operating System as a Resource Manager. By way of example, the first section talks about how complicated it is to read a file off of a floppy disk if one must deal directly with the hardware using machine language. Working through the controller's I/O ports, the floppy drive must be instructed to start spinning, the appropriate amount of time must pass for the drive to spin up, the disk arm must be moved to the appropriate locations, specific byte patterns representing commands must be issued to retrieve data block by block, and so on. Instead of requiring every application developer who needs to access data on a floppy to do all of this, the OS defines a software module that sports a particular interface; every floppy manufacturer must deliver this bit of software with its drives, and that bit of software must implement the interface required by said OS. This bit of software is called a driver. In this way, application programmers need only deal with a software module that does not change from floppy drive to floppy drive and that provides a highly abstract view of the drive, one that can act on instructions such as "get me the file named x.txt".
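
To put the extended-machine idea in code terms, the layering looks something like the sketch below. It's Java purely for illustration--real drivers are C code written against a kernel's own interfaces, and the names here are made up:

    // The OS-defined contract that every floppy (or other block-device) driver must implement.
    interface BlockDeviceDriver {
        byte[] readBlock(int blockNumber);              // spin-up, seek, and command bytes all hidden in here
        void writeBlock(int blockNumber, byte[] data);
    }

    // The abstraction the application programmer actually sees; an implementation of this
    // sits on top of whichever BlockDeviceDriver happens to be installed.
    interface FileSystem {
        byte[] readFile(String name);                   // "get me the file named x.txt"
    }

Swap the drive and its driver and nothing above the BlockDeviceDriver line has to change; change the BlockDeviceDriver interface itself and you've changed the OS.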

In the second section, Tanenbaum talks about the OS as a resource manager. The example in this section refers to a situation where several applications are sending documents to the printer simultaneously. If all these apps were allowed to simply deal directly with the printer driver, it would receive the requests all at once and process them as they arrived, printing a few characters from document 1, then randomly switching to document 2, then 3, and so on. The role of the OS in this scenario is to buffer all requests for a particular shared resource in memory or on disk (a technique known as spooling) and then send them off in an orderly fashion, one after the other, fully completing each job before starting the next one.
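
In code, the resource-manager role boils down to something like this toy spooler--a sketch of the concept, not the way any real OS implements printing:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A toy print spooler: applications enqueue whole jobs; one loop owns the printer.
    class PrintSpooler {
        private final BlockingQueue<String> jobs = new LinkedBlockingQueue<>();

        void submit(String document) {            // any application may call this at any time
            jobs.add(document);
        }

        void run() throws InterruptedException {
            while (true) {
                String doc = jobs.take();         // one job at a time, in the order they arrived
                sendToPrinter(doc);               // fully completed before the next one starts
            }
        }

        private void sendToPrinter(String doc) { /* hand the job to the printer driver here */ }
    }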

This seems to me to be a very sensible definition of an operating system. Some might be uncomfortable with it, though, because it doesn't include a shell--in fact it specifically excludes it. We are used to thinking of a machine with an operating system as being a usable computer. Without some kind of interface to the OS, the machine is still, as Tanenbaum calls it, "a useless lump of metal." But to these people I would point out that there's really no reason that usability must enter into this definition. The definition of a computer--that is, the hardware making up a computer--clearly does not include an OS and cannot therefore be useful in any meaningful way. Usefulness doesn't enter into that definition, so it seems arbitrary to require that it be a binding condition of an operating system.

So for the purpose of figuring out where an OS begins and ends, consider this question: what can change about a usable computer system that we would not consider an alteration of the operating system? Clearly, if I take out one floppy disk drive and plug in a different one made by a different manufacturer, we would not say that the operating system has undergone a change. We are allowed, then, to change hardware and its associated driver software without changing anything about the OS. However, as I pointed out before, every floppy manufacturer must provide a driver that implements a specific software interface defined by that particular OS. We may vary the implementation from driver to driver, but not the interface it presents to the OS. The driver interface specification, therefore, is part of the OS, and if those interface requirements placed on hardware manufacturers change, that is an OS change. That's where an OS begins.

Now let's consider where an OS ends. Let's say we have a Linux system with bash installed, and we install tcsh and remove bash. Have we changed the OS on that computer? I must agree with Tanenbaum here as well and say no...we've merely uninstalled one app and installed another. On the other hand, if we change the way the OS manages drivers, schedules threads for execution, or manages memory, we have changed the OS. If we change the software interface to any of the hardware resources that are shared by all system or application programs, we have changed the OS. Swapping one command shell for another certainly does not require that any program on the system change the way it's accessing any hardware resource via the OS, so a shell cannot be considered a part of the OS.

This is all very interesting when we consider a modern "operating system" like Windows 2000. With a standard installation of this OS, we not only have an operating system but a whole lot of system and application programs, such as the NT command shell, Windows Explorer, Internet Explorer, and Notepad, just to name a few. Heck, the GUI itself is nothing more than a complicated kind of visual command shell on steroids, so even that is not part of the OS itself. (This is obvious in Linux, with its wide choice of GUIs such as KDE, GNOME, Enlightenment, WindowMaker, etc.) Of course, because the whole thing has historically been delivered as one big bundle since Windows 95 (and, some would argue, even Windows 3.1), consumers have been trained to think of it all as part of the OS. But in truth, the only thing stopping other companies from writing alternative GUIs for Windows is that Microsoft will not publish the entire set of software interfaces that sit atop the OS proper. It may even be the case that there is no strict software separation between the GUI environment layer and the OS proper. Still, if MS updates the way windows are drawn on screen, I would say that nothing about the OS has changed, same as if they changed the way the NT command shell works.

There is a reason I'm taking such great pains to clearly delineate the boundaries between programs and the operating system. With this understanding, it is easy to see the court case against Microsoft's bundling of Internet Explorer with Windows in a new light. If it is acceptable that Windows should come with the NT command shell, a GUI environment, and Notepad, why not Office and Internet Explorer? Microsoft is simply following a long-standing tradition of shipping various applications along with the OS. And if that is allowed, there's no legal basis for treating the bundling of one application as acceptable while prosecuting the bundling of another.

In my estimation, a proper understanding of what an operating system is allows one to draw the distinction between OS and application very clearly. This approach would make such rulings against bundling enforceable, which I think is very important because instituting laws that cannot be enforced breeds contempt for all law.

Having said that, I do think that any company ought to be allowed to sell its products in any way it chooses. Even under well-regulated capitalism, I don't see a problem with allowing MS to bundle all of its apps with its OS, selling the whole kit'n'caboodle as a single package if that's what they want to do. Again, making distinctions between apps like Word and Notepad seems arbitrary to me and ultimately hurtful to commerce. If MS will not be required to split the OS from all other code, then from a legal standpoint they should be allowed to bundle any and all apps they like.

User Journal

Journal Journal: Linux: Mandrake 10.0 Experience 5

Hey all, thought I'd share with you my recent experience installing the Mandrake Linux distro just last week. I'm sad to say that it seems Linux, at least Mandrake, is not quite ready for prime time yet.

Let me begin by heading off at the pass those who will say, "Ah ha! Well, there's your problem, you shouldn't have gone with Mandrake. You should've gone with a good distro like Debian/Gentoo/Linspire/SuSE/etc." I reject the validity of this point because, as someone who likes the idea of Linux but mainly uses Windows these days, I just don't have the time to investigate in careful detail the differences between all of these. I did about a half hour of cursory research, chose Mandrake, and moved on. If there is an up-to-date page that lists all of the distros and compares/contrasts them, I was not able to find it, meaning that anyone else wishing to adopt the Linux platform and only willing to do about a half hour's worth of research will not find it, either.

Ok, so, I d/l'd the images and burned them onto CDs. Booted off the CD and in about half an hour (not including d/l and burn time) had Mandrake up and running. So far so good. This machine happened to be on a Windows domain, so next comes Samba.

I do not claim to be a Samba expert, but then again, all I wanted to do was get the machine on the domain and get filesharing enabled both ways. Perhaps I'm not the most savvy Linux doc reader and HOWTO tracker, but I'm not a disaster either, and I was very disappointed with how long it took me to figure out how to get this machine on the domain. I was dismayed when I couldn't find SWAT installed and discovered that, despite my specifying that I wanted Samba during the installation process, SWAT was left on the CDs. Ok, so I installed SWAT and fired it up...and found absolutely no indication of how to get the machine on the domain through this supposed "web administration tool".

Finally, I found the magic incantation to join a Windows domain on a discussion board, not in the Samba docs or the SWAT help files. (Who wrote those help files, by the way? Clearly someone who expects that anyone using SWAT is dedicating a large part of their career to learning and using Samba.) Then, after about another hour's research, I discovered that to actually see files on the Windows boxes, I must run an app called smb4k (uh huh) and mount their shares.

Now I had a crazy idea. If this is the application to use for getting access to the Windows boxes, why not try to expose some Linux shares to the Windows boxes using it? Alas, it was not to be. Apparently it did not occur to anyone working on Samba that one might want a single application from which one could expose files in both directions...this must be that silly Windows naivete on my part that I keep hearing about. Finally, I cracked open the /etc/samba/smb.conf file and vi'd my way to bidirectional file access.
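
For anyone stuck at the same point, the relevant smb.conf pieces end up looking roughly like the fragment below. Treat it as a sketch--the domain name and path are placeholders, and the smb.conf man page for your Samba version is the real authority:

    # /etc/samba/smb.conf (fragment)
    [global]
        # the NT domain to authenticate against
        workgroup = MYDOMAIN
        security = domain

    # a directory on the Linux box exposed to the Windows machines
    [shared]
        path = /home/me/shared
        writeable = yes
        browseable = yes

(For the record, the domain-join incantation in Samba 3 lives in the net command--something along the lines of net rpc join -U Administrator, run once with a domain admin account.)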

I realize it's not fair to blame Mandrake for this. After all, this is Samba, not Linux, and I understand the difference. So far, the only complaint I have with Mandrake is that I asked the installation to give me Samba and it didn't also automatically install SWAT. So now it's time to install Firefox. I d/l it, run the installer...everything goes smoothly and the browser pops up on my desktop. I've been running Firefox on my Windows box for about two months, so the first thing I want to do is configure it. I click Tools -> Options...Options? Where are you, Options? After a bit of poking around I discover it's under Edit -> Preferences, like Netscape, on Linux, but under Tools -> Options, like IE, on Windows. Greeeeat. So much for multi-platform...apparently Firefox is a different browser on Windows than on Linux...now I'm starting to wonder what else I can expect to be different.

Ok, so it's installed, the installer's popped up a browser window, I've configured it, and I'm browsing. I close it, and a little later decide to do some more web browsing. Over to the menu app launcher...Firefox is nowhere to be found. Ok, I think, Mandrake must not work with every installer to add apps to the menu. I'll just add it myself. After a bit of poking around and 10 minutes of reading the menudrake docs, I get the menu item in...but no Firefox icon. Just a generic application icon next to the launcher button. Thirty more minutes of poking around, no luck. I install LimeWire and have a similar experience: I have to add it to the menu manually, and again there's no icon. Now I have two apps with generic system icons next to them. I look into the future and try to imagine a menu of app launcher buttons, all with generic system icons next to them. My stomach turns.

Ok, enough time on that. Time to do my web browsing. I click the generic icon to launch Firefox and see the Firefox entry added to the taskbar. The wait symbol spins for about 10 seconds, and then it disappears. No explanation, no error message, no nothing telling me why it didn't load properly. I run it again...same thing. This time, I pop up a console and try. Ah ha! Now I see an error message at the prompt saying that Firefox terminated because some dependent library is not installed or configured properly. But I just had it up and running!

No problem, I reinstall it. The browser pops up at the end of the install as before and I browse. Close it, reopen, and...nothing. Open a console, same thing...some kind of library problem.

So, am I impressed? No. I do so want Linux to work well, because I hate the evil empire, but the only friends I have who run Linux successfully as their main desktop at home fall into two camps: they either live and breathe the platform, keep abreast of every new development for it, and don't mind spending hours figuring out how to add skins to xmms or configure the proper icons in the app launcher menu, or they've simply gotten it to a workable point and become frozen in time, afraid to make any major changes to the way they're doing things for fear of opening a Pandora's box.

It's been about three years since I did a brand new install of any distro, and I had heard that the latest wave of releases solved all of these problems. I'll admit that I spent less time mired in documentation this go-round than I have in the past, but I am still left with a bad taste in my mouth. It's sad, really, because I do so want Linux to work for me.

User Journal

Journal Journal: Pairing Wine with J2EE

Over the last few years, I've drunk quite a bit of wine and written quite a bit of J2EE code for enterprise applications. At one point over this time period, I was engaging in each of these activities frequently enough that it occurred to me there is no natural law, dictated by sense or science, that the two should remain divided.

The idea began simply enough, but The Question soon arose. There are many aspects to wine, so much so that pairing a wine with the proper food is a subject unto itself. J2EE programming is no less complex. The Question: how does one properly pair wine with J2EE such that the experience, as with wine and food, will be at the very least tolerable, and at best a divine engagement of the senses and faculties?

Anyone who understands the Herculean nature of this epicurean undertaking will forgive the roughness and incompleteness of the sketch that follows.

  • Design. Dry, grassy, mild, and delicate Sauvignon Blanc, or Brut blanc de blancs sparkling wine. These are very finicky wines that demand a high degree of compatibility with the Design. The Design must not be heavy-handed and overshadow the nuance of the wine, yet it must provide the structure to carry the harmonic flavor notes supplied by the wine.
  • JSP/Servlet. Riesling, fruity Merlot, pleasantly spicy Gewurztraminer, flinty Vouvray, or off-dry sparkling wine is best with JSP/Servlet. Like any good UI, this wine must be approachable and friendly even to the uninitiated palate. Enjoyment of such a wine should almost direct itself, requiring little knowledge or experience from the user. Of course, this is not the time to bring out flabby wines just because they are uncomplicated--the appropriate wine for this pairing must carry off easy appreciation while maintaining good structure and balance that is both visually and sensually pleasing.
  • EJB/JMS/JCA/JDBC. Only a full-bodied red can stand up to the rich, meaty qualities of EJB/JMS/JCA/JDBC. Up to this point, the pairings are meant to titillate and invite the user to continue to the next course. This pairing is meant to sate the user's hunger. Whites and light-bodied reds need not apply. This is where all of the "real work" happens on the palate and in the application, so only a zesty Zinfandel, intense Cabernet Sauvignon, or inky Syrah will do. Many are at first taken by surprise that EJB/JMS/JCA/JDBC pairs well with so many different varietals...usually choices are restricted to Cabernet and Cabernet alone. To these folks I would ask them to consider the JDBC element of EJB/JMS/JCA/JDBC with a bit more weight. The JDBC aspect allows a certain degree of varietal-independence along with the vendor-independence for which it is better known.
  • Database design/Legacy programming. Only an Old World wine will do here, such as a Chateauneuf-du-Pape, a Chateau Margaux, or an aged Petrus or Mouton-Rothschild. Sure, such wines are pricey to own, much like Database design/Legacy programming itself; a wine that pairs in this category will shun the newer, more radical approaches to winemaking and stick with the tried, tested, and true standby methods of a long-forgotten past, when the ritual of the winemaking process was as respected as the product itself. The less senior who partake of this pairing may scoff that the experience is "outmoded" or "arcane", but the true aficionado of Database design/Legacy programming can appreciate the layers of complexity upon layers of complexity present within both elements of the pairing. While many are put off even by the sight of such wines, tinged with brown and perhaps looking well beyond their years, a notable few know how to fully appreciate the terroir that is the product of such old vines.
  • Testing. The attitude of those who are likely to partake in Testing in the first place can be summed up as follows: they care little for nuance and delicacy, and instead prefer to simply get the job done; testing is a necessary evil, and the matching wine is viewed more as sustenance than a beverage to be enjoyed and pontificated upon. As such, the proper wine that pairs with testing could be any box wine, cooking wine, or bottled wine such as Two Buck Chuck. Those that frequently partake of Testing engage the task with the singular focus of finishing, and have little regard for structure and balance. Any wine offering primary notes of sugar, high-fructose corn syrup, or saccharin will do nicely here--rough secondary flavors need not be considered, for the wine will be quaffed along with the Testing before they have a chance to even develop on the palate.

Enjoy!

User Journal

Journal Journal: Ironic, Hairy Datums and Biweekly Virii-Ridden Octopusses 12

Ok people, most everyone around these parts graduated from college and those that didn't are scrappy in the brains department. It's time we stopped misusing common words. Note that I said "it is time", as opposed to "its time", as in: As far as proper usage of the language, its time has come.

You know what I'm talking about. You're sitting at the computer, less than one second of typing time away from Merriam-Webster.com or Dictionary.com, and you misuse a word despite knowing you only have a vague grasp of what it means.

I'm not an English Nazi. I'm against English Nazism (I'm against the German kind too). If you're correcting someone's improper usage even though that person has expressed themselves clearly and their meaning properly applies to the situation, you are detracting from the conversation. If someone misuses a word like sedulously, ok...I might gently point it out, but I'm not going to hold it against them. At least they're reaching to expand the ol' vocabulary.

On the other hand, sometimes misuse makes the conversation vague or opaque. Sometimes the word is just so common that misspelling or misusing it grates on the listener to the extent that the point is lost...your statement becomes about how ignorant you are instead the topic of conversation. I'm talking about people that have graduated the fifth grade and still swap loose (tighten that nut) for lose (better luck next time). There definitely not using they're dictionaries, their. Oops, that should have been they're/their/there (note that middle one is t-h-E-I-r).

What about when English Nazis go wrong? Who corrects them? Well, I'm about to. (Are you going to complain that I ended that sentence with a preposition, English Nazi? Before you do, make sure every case can be accounted for. I think you know what I'm talking about. Clearly I'm making you mad, so stop sitting around. For or against me on this, you should come out. If you think you're right, it's time to act up. Ah well, if you didn't catch it before when I did the exact same thing at the top of the second paragraph, why should I pay attention to you now?)

Read the following example passage.

Good data is elusive on the genetics of hair color. One of the problems with studying the topic of hair color is that individual genetic studies usually contribute very little data to the understanding of hair color inheritance. So it is not yet possible to grasp why a parent that has blond hair and a parent that has brown hair can produce a child that has black hair because hair is associated with largely unstudied sections of the human genome. Because genetic studies typically have a primary focus elsewhere, information on this issue of hair color is sparse. Common statistical techniques requiring a large sample space cannot be applied to the data; there is simply too little available to make a strong case.

Ok, is there anything wrong with the above passage? Did my use of data as a singular noun bother you? If it did, you, sir, are an English Nazi, and worse, you're wrong in correcting me to boot. (I might have chosen to state this differently: "Your wrong is in correcting me to boot.")

Data, as used above, is indeed singular. It's true that the etymology of the word comes from the plural form of the Latin word datum, and that it maintains this proper usage if you are referring to several data points and they must maintain their identity as individual entities in the context of a particular sentence. I'll bet you rarely use it this way without including a definitive measure word, though, because even you think it sounds clumsy. Moving on, let's rewrite the above passage under the guidelines of English Nazism, no aggregative singular forms allowed:

Good data are elusive on the genetics of hair color. One of the problems with studying the topic of hair color is that individual genetic studies usually contribute very few data to the understanding of hair color inheritance. So it is not yet possible to grasp why a parent that has blond hair and a parent that has brown hair can produce a child that has black hair because hair are associated with largely unstudied sections of the human genome. Because genetic studies typically have a primary focus elsewhere, information on this issue of hair color are sparse. Common statistical techniques requiring a large sample space cannot be applied to the data; there are simply too few available to make a strong case.

"Wait," you say, "I didn't mean you should apply the same rules to hair and information that I'm insisting upon for data!" Well, why not? English is already complicated enough. I think I have the right to ask you to be consistent if you're going to change what is currently recognized as proper usage.

But, ok, let's say I go along with you on this one. Let's say that data is somehow different from hair and information (despite your utter lack of support for this bizarre idea). Let's look at the last sentence of the passage and do a little compare/contrast. I would say this is correct:

Common statistical techniques requiring a large sample space cannot be applied to the data; there is simply too little available to make a strong case.

...and you say it's:

Common statistical techniques requiring a large sample space cannot be applied to the data; there are simply too few available to make a strong case.

It seems your insistence on incorrect grammar has actually changed the meaning of the sentence, or at the very least, made it vague. Too few available what? Is there not enough data, or are there too few statistical techniques available? In the original sentence it is obvious that "too little" refers to too little data because "too little" cannot modify statistical techniques or any other plural, for that matter. If you're going to insist that data is plural, then you can no longer say things like too little data, very little data, or too much data. This would be like using too little to modify any other plural, as in too little screws, which is definitely grammatically incorrect. If you do say it, people are likely to misinterpret your meaning as too-little screws, in other words, each individual screw is too little to do the job: Why won't these boards stay together? Too-little screws. The fix is to use the same number of bigger screws, not more of the too-little ones...that won't help.

The last vestige of the English Nazi's argument clings to the idea that, if we accept data as both a singular and a plural form, how can we possibly know which is the proper form to use? Well, it depends on context, just like you can be both singular and plural. If you're speaking about one datum (an individual fact or proposition) here and one datum there, and they must retain their identity as individual entities, then you may use data in its plural form: This datum and that datum conflict; these two data are at odds. (Note that without the "two" you're back in vague-land. Without that "two" the listener is likely to wonder if you're making reference to all of the data, or still talking about those two points.) In every other case, if you're wondering how to decide, use a measure word instead to make your meaning explicit.

What's a measure word? It is a word, often implied instead of explicitly stated, that organizes a number of entities into a grouping. Consider this statement: My hair is blond. The implied measure word depends on the context; usually, I'm talking about my head of blond hair (that's why it would be as improper to say, "My hair are blond," as "My head of hair are blond."). Similarly, when I speak about data it is most often in reference to a set of data. If saying "the data is..." makes you uncomfortable, go ahead and imagine "the data set is..." If you're talking about two sets of data which must maintain their separateness, go ahead and explicitly state the measure word and talk about "sets of data" so as to avoid confusion for yourself and your listener.

What I'm really getting at here is the usage of data as a collective noun. If the data under discussion is being referred to as a collective whole, then it's singular and can take on all the properties of a singular word. If the individual members of the data set are actually what's being referred to, then it can be used as the plural form. Examples follow.

  • The jury is arguing. What argument is the jury, as a whole, making?
  • The jury are arguing. I hate it when they argue amongst themselves. What point in particular is causing the problem?
  • This family is staking its claim. If they wanted to, the family could stake their claims. But they've decided to stick together as a unit, and therefore it is, as a group, only staking one claim. And that's good...I like to see families stick together.

Data is a particularly frustrating example of English Nazism; that's not to say there aren't valid complaints about the way some people pluralize. There is no excuse for speaking about more than one virus as viri or worse, virii. It's viruses. On the other hand, just because it's proper to say octopi doesn't mean octopuses is wrong, just don't spell it octopusses. And when people refer to a computer as a box, as in I run a Linux box and a Windows box..., they should not conclude the thought: ...so I have, in toto, two boxen. Then again, whenever I see "boxen" it's obviously intentional and hilarious (alluding to the plural of ox), so in that case it's fine. On the other hand, the absolutely proper usage of "in toto" where in total would have sufficed is as infuriating to me as I'm sure it is to you, unless the person is making a joke.

If that example does not engender English Nazism, here's one that does. It's common to refer to an abstract person as in the following sentence: Before a person speaks, he should first think. The politically correct will try to correct this; he should be written instead as he or she. I reject this...in reference to the abstraction of a person, I see no problem with the assumption that person is male. In fact, I would go so far as to argue that the he in this sentence does not refer to maleness at all. It is clear to even the simplest mind that the person referred to could be either male or female and the sentence still holds true; to assume the writer is actually referring only to males is to intentionally misread it.

Now we've all heard the story of the boy driving with his father when they have a big accident. Both are rushed to the emergency room whereupon they are whisked away into surgery. The surgeon, upon seeing the boy's face, exclaims: "We need to get another doctor in here. I cannot operate on my own son." If you haven't heard this little parable, no, the boy does not have foster parents, he was not adopted, he was not driving with a priest, and he doesn't have two fathers. The surgeon is his mother.

This story, though, does not illustrate that we are a sexist society that can only remedy our situation by applying the clumsy he or she construct wherever we would normally use he. Instead, it only illustrates that we are minimally observant, and that, for whatever reason, most surgeons are male and we happened to notice...therefore, unless explicitly stated otherwise, we generally tend to assume surgeons are male (and no, the reason most surgeons happen to be male, and whether that in and of itself is due to sexism in our society, while a potentially rich and perfectly valid topic of discussion, is not germane to this discussion on semantics). I would point out that feminists are just as likely to be taken in by this story as even the most chauvinistic of men. Clearly, if one's sexist tendencies were the sole reason one might find this story confusing, only the sexist would be confused by it. Imagine a world in which there exists such a simple litmus test for sexism, racism, or whatever other -ism you can think of.

So why should I listen to you, PC Nazi? You didn't seem to mind when I referred to you as "sir" in the paragraph above between the two passages concerning usage of the word data. Besides, languages have a long history of noticing gender. How would you apply your Nazism outside of English? Would you argue that Latin, Spanish, and Italian should do away with gender-based noun declension? You sad, strange, silly little man.

That being said, in my mind the jury is still out on the singular use of they. Occasionally it seems right to use they in reference to the abstraction of a person. It seems to emphasize the abstractness of the referent...it seems to drive home the point that any one of us could fit the bill and the message still holds true. If you strongly disagree, I wonder if you disagreed as strongly when you ran through the third paragraph of this very essay. I'll bet most of you will have to go back and reread that paragraph to see what it is you so readily accepted on the first read-through.

Besides obviously proper and improper usage, there are words that don't really lend themselves to this kind of analysis. For example, consider whether the following is improper in any way: I quickly scanned the police report to see why the deputy had been out in the field for a full two hours. It used to be that this would have been improper use of the word scan, which meant "to examine closely" (scan still retains this definition). What happened to this word, which now also has a conflicting definition, "to look over or leaf through hastily"? I'm betting that technology is to blame for that second conflicting definition. When the first grocery store checkout scanners came out, the technologists probably titled them scanners because they closely examine UPC symbols--that they do so rapidly is nice, but cannot be the point of the original title or else they would have been called skimmers. But to a customer, the scanner was a jump forward not because it was marginally more accurate than a checkout clerk, but rather because it was vastly faster. So the association was set in people's minds, and who's to say whether this kind of evolution is not allowed? I remember throughout my youth being corrected on this by librarians and English teachers, but as it happens I was correct and they were wrong; I was just ahead of my time in recognizing this as a necessary evolution of the language.

Then there is usage that has become proper that I cannot bring myself to use. Consider the prefix bi-. Does this mean "two" or "half"? Well, what does bisect mean? It means to divide into two parts, or cut in half. So this doesn't help us nail it down because it's ambiguous as to whether the bi- signifies two-ness or half-ness. What about bisexual or bicycle? I would argue that, in these two cases, it is clearly two-ness being expressed...half-ness just doesn't make sense in the case of a bicycle, since unicycles and tricycles exist and the comparison is clear, and I don't even want to know what your perception of bisexuality is if an interpretation based on half-ness makes sense to you. This leads me to think, for the sake of consistency, I should consider bi- prefixes to refer to two-ness. This approach does not exclude any case which might also be construed as half-ness, for all such cases can just as validly be interpreted as instances of two-ness as in the case of bisect. The reverse is not true.

Ok, so we're agreed, then. Words prefixed with bi- imply two-ness, and rely on the stem of the word to define the thing that has taken on two-ness. Bisect, for example, means to section, or divide, into two parts. The fact that a bisected object is associated with halving, as opposed to doubling, has to do with the fact that the object is being sect-ed, and nothing to do with being bi-ed.

What about biennial, then? What should this mean, twice per year or every two years? Well, the stem -ennial means "year", and bi- means "two", so I arrive at an expected definition of "occurring every two years". Bingo--that's exactly what it means.

What about biannual? The same argument applies, right? Wrong! Well, not wrong, but not necessarily right. This word can mean either "twice per year" or "every two years" (likewise with biweekly and bimonthly). Arrrgh! How fickle! But, I am forced to admit that there is simply no other available way we could, in a single word, refer to something that happens "twice per" some period of time, so I'll grudgingly let it go.

Except...there is such a word available to us, and it doesn't have an alternative, conflicting definition! Furthermore, it has no connotation of two-ness associated with it, so the teeming masses are not susceptible to misunderstanding it. I'm talking about semiannual, semimonthly, and semiweekly. Everyone knows the prefix semi- implies half-ness, and thus endeth the confusion. The problem, of course, is that the imposter definitions for the bi-words have already snuck in under the radar! Well I, for one, refuse to acquiesce, and I will continue using biweekly to mean every other week and semiweekly to mean twice per week. If you don't understand what I mean when I say biweekly, tough. Everyone will suffer the ambiguity at every turn until they see fit to dispense with the bad definition as I have. (There is hope, it seems; my preferred usage is predominant.)

So far I have discussed mainly semantics. Of greater importance are situations when a person conveys a completely different idea than what is intended, a much more egregious misuse of the language. I invoke the poster child of such misuse...yes, I'm talking about irony, and not the kind that's like brassy or goldy except with iron.

The principal definition of irony is: "the use of words to express something different from and often opposite to their literal meaning." I would argue that this definition allows an interpretation to slip through that does not capture the spirit of irony; merely expressing a meaning that is "different from" the literal meaning is, to me, more sarcasm, or juxtaposition, or something, but not irony.

What's the difference between, say, sarcasm and irony? Well, sarcasm only requires difference between what is expressed and what is literally meant, and it must include the intent to ridicule or otherwise wound. Irony, in my mind, requires more than simple difference between what is expressed and the literal meaning; the difference must be one of opposition. Additionally, irony may or may not be used to ridicule or wound...sometimes it's just used for humorous intent. An example: a sign very near a laser source that reads, "Do not look into laser with remaining eye." This could only weakly be interpreted as ridiculing anyone...the focus is not on cutting down the poor soul who just lost an eye. It's funny because of the idiocy of the person who posted a warning sign, which ostensibly exists to prevent injury, in a place that is so likely to hurt someone that the sign itself acknowledges it.

The word irony is so misused, I fear the concept may require a college education to properly grasp. I award people some points for effort when they misuse the term but get close, where perhaps sarcastic or sardonic would be better choices. I do not have such a forgiving attitude when the person clearly has no idea what they're talking about. I have known people that use ironic when they really mean funny. I challenged one such usage where the person said, "Ha ha! That guy got hit in the nuts. That's so ironic!" Upon further questioning, this person explained, "No, it is ironic because the reason he got hit in the nuts is that he was trying not to...if he'd just stayed where he was, he would've been fine."

Nice try, bucko. That's just bad luck, or perhaps ineptness, but not irony. To clear the bar of irony requires conscious, carefully directed thought. If the effect of all that careful consideration is the opposite of what's intended, that's irony. I would hardly call an automatic response of the nervous system (that is, jumping to a location one considers out of the way of an oncoming softball) careful conscious planning. But how can I hold him responsible when Alanis Morissette makes a million dollars off a song that is ostensibly about several ironic situations...except it gets the definition completely wrong, and none of the situations described in the song are actually ironic? Her careful plan to write a song about irony resulted in a song that is about everything but irony. One might expect the public to chide such a thing. Instead, fans welcomed it based on the same misunderstanding of the term. Maybe she's satirizing her own fans' ignorance. More likely, she's totally unaware that her attempt to raise irony-awareness is deeply flawed due to her own ignorance of the concept. Now that's irony.

Should you use a serial comma when writing a list: bags, bushels, and baskets vs. bags, bushels and baskets? Yes, I say you should. I know everyone will say it's perfectly acceptable either way, but if you typically don't use it, you might find yourself in the following situation. You've written a treatise throughout which are several lists. You come to one list in particular in which you would like to imply certain groupings of two: bushels and baskets, packages and boxes, and bags. Uh oh...now what? If you leave out that serial comma, as you have been doing all along, your reader will think you meant to group bags with packages and boxes. If you leave it as it appears above, then the reader will likely be confused and reinterpret all of your previous lists, lacking the serial comma, as an implied grouping. Oh, what to do, what to do? You're screwed...you should have taken my advice. Besides, what if I was against the serial comma and I decided to dedicate this essay to "my parents, Ayn Rand and God"?

What about good/well? Well, I don't know about you, but I feel good and I'm looking well! That is to say, my fingers work well enough to feel things, and my eyes work well enough to see. In everyday parlance, though, I have no problem with saying, "I feel good," even though technically it ought to be, "I feel well." On the other hand, people who say, "You're looking well!" actually mean to say, "You're looking good!" They are being pretentious by calling attention to the fact that they're speaking "proper English"...except they're showing off improper English. Err on the side of understandability. Always remember: sedulously eschew munificent prolixity, obfuscatory redundancy, and unmitigated hyperverbosity. I'm likely to forgive you if you go wrong in one direction...I don't look so kindly on pompous asses who don't know that about which they talk.

Time for some rapid fire...buckle up.

If you don't know when to use than vs. then, then you're dumber than a squash and I can't help you.

Here's one I don't really care about, but it's probably worth something to someone. I.e. means "in other words," e.g. means "for example". It's good to know the difference, e.g.: you probably won't get this, i.e., you're just too dumb.

It's chAmping at the bit, not chOmping at the bit. Also, it's my old stAmping grounds, not my old stOmping grounds.

A thing cannot be very unique. It's either different from everything else, or it isn't. Uniqueness does not vary depending on whether it's very different or just a little bit different.

I might say there are a myriad of examples of improper usage in this essay, but why would I want to when I could just as easily, and more simply, state that there are myriad examples? (Certainly this essay isn't long enough to provide myriads of examples.) In my opinion, myriad is an adjective that is also a noun; it is not a noun that can also be used as an adjective. The noun form should mostly be left to the skilled wordsmiths, such as when Samuel Taylor Coleridge writes "Myriad myriads of lives." Most of the time, the of is just dead weight, so listen to Occam and toss it overboard.

There. I know I feel a whole lot better, don't you?

User Journal

Journal Journal: Religion is Irrational 2

Hold up, religious zealot! Don't get all offended. It's true...religion is irrational. You're the one linking the word "irrational" with a negative value judgment...I'm just speaking the truth.

What about the big three? Can anyone make a logical argument that Christianity, Islam, or Judaism is a rational pursuit? (I don't feel the need to address Hinduism, which relies on a vast mythology that is widely regarded from within the religion itself as allegory, nor Buddhism, which openly and specifically addresses the concept of rationality itself as being an obstacle to enlightenment.) This is simple to answer, as it happens. Does any one of these not require the believer to make a "leap of faith"?

As in the formal discipline of logic, reasoning can lead to any statement at all, true or untrue, if even the smallest inconsistency is allowed to creep in (logicians call this the principle of explosion). Here, look:

x = 1 (line 1)
x^2 = 1 (2)
x^2 - 1 = 0 (3)
(x - 1)*(x + 1) = 0 (4)
x + 1 = 0/(x - 1) (5)
x + 1 = 0 (6)
x = -1 (7)

At the beginning of this proof, I set x equal to 1. Following a sequence of perfectly valid mathematical operations, x comes to equal -1. Therefore, 1=-1. Right?

Of course not. A small logical inconsistency snuck in there, resulting in the logical error. And if you didn't catch it, that means you went along with it because it seemed reasonable...you willingly made a leap of faith in the correctness of the errant step that resulted in a small but unfathomably significant flaw. If that small flaw is allowed to remain as truth in your system of reasoning based on the above proof and your leap of faith, though, I can build an entire mathematical framework based upon it that can result in whatever statement I like, all without having to introduce even one more error.

So, ok, if you don't know yet, I'll tell you: where'd you make your leap? Take a closer look at line 5. See the right side of that equation: 0/(x-1)? This is the problem...see, I've already defined at the beginning that x=1. So if I evaluate line 5 of the "proof", it becomes obvious what's wrong: 0/(x-1) = 0/(1-1) = 0/0. Dividing both sides by (x - 1) meant dividing by zero, and you can't divide by zero.

So, you see, a tiny, tiny bit of irrationality injected into a whole lot of rationality can result in a situation in which I can convince most people that anything of my choosing is true, provided they're willing to accept that 1=-1 based on my proof above. And accept it they must, absurd as it is, because it's mathematically "proven".

I'm not saying that irrationality is necessarily bad. In fact, it's quite likely that in many cases irrationality serves our purposes. It's probably true that we have, over millions of years, evolved many irrational behaviors, instincts, and beliefs because nature selects for survival, not rationality (all you have to do is look at a duck-billed platypus to figure that out).

Of course, this does not mean that all irrationality is good, either. Now that you, along with every other religious person I've ever spoken to on this topic, including priests, deacons, rabbis, and imams, agree that religion depends upon a "leap of faith", a step of the mind "beyond reasoning", I'd like to solicit a bit of feedback.

Can you identify exactly what philosophical axiom you hold as a result of your leap of faith? What is the simplest, most fundamental statement you hold as true that serves as the basis for the framework of your religious belief system? What is the "line 5" in your religious "proof"?

I'd like to reiterate that I pose this question not as a snarky passive-aggressive attack on religious belief, but rather as a philosophical survey. I realize that everyone, religious or not, if they care to trace the lineage of logic of their worldview fully enough, must hold a set of axiomatic beliefs that rest upon a firm bedrock of faith. I believe my senses generally don't lie to me. Descartes believed he existed because he was conscious, sentient, and could direct his thought processes. (I can direct my own thought processes, to a degree, from a nonexistent body that is present only in my mind during dreams...so for me, his "I think, therefore I am" is not a good axiomatic belief to hold.)

So I want to know: what is the most fundamental statement you can make that rests upon no reasoning other than sheer faith that specifically allows for religion to enter your world?

User Journal

Journal Journal: Hot Button Issue: Did Bush Lie? 7

Bush did not lie about weapons of mass destruction in Iraq. This bears repeating: Bush did not lie about WMDs in Iraq.

Anyone who doesn't know this by now does not want to know it. While there was disagreement about how to handle Iraq, everyone was in agreement that Iraq either had or was dangerously close to having WMDs. There is no doubt that Iraq did indeed have, and use, WMDs in the past. There is no doubt that Iraq had chemical and biological WMDs when inspectors were banned from the country, and these stockpiles were unaccounted for as of the beginning of the war. And there is no doubt that Iraq had several cozy relationships with terrorist groups that want to hit the US.

Bush acted on information believed by Democrats and Republicans, President Clinton, Tony Blair, Vladimir Putin, and the United Nations. It's true he was the only one to act. It's not true that he was the only one who believed there were WMDs. That concern on this issue ran across such a broad spectrum as the aforementioned list of believers is more than enough for me.

It is true that there is something fishy about the oil-for-food money situation in France. It is true that key UN people were possibly benefiting from this soon-to-blow scandal. It is true that Russia and France stood to benefit greatly, in the way of oil owed to them, by preventing the overthrow of Hussein's regime.

Bush did not lie about the connection between Iraq and Al Qaeda. Read it again: Bush did not lie about an Iraq-Al Qaeda link. The Bush Administration did not say that they collaborated specifically on the 9/11 attack. That Al Qaeda carried out the 9/11 attack and that Iraq and Al Qaeda were generally linked over the previous 10 years, though, is more than enough for me. That any support Iraq gave Al Qaeda indirectly supported the 9/11 attacks by freeing up more of their resources and time is more than enough for me. That Iraq supported any terrorism at all is more than enough for me.

Doesn't this mean we have to attack North Korea? China? Iran? Saudi Arabia?

No, it doesn't. That's the great thing about being the good guy. We can pick off the bad guys in whatever order we choose, at our own pace, and using any of the methods available to us, whether it be mild protestations or economic sanctions or all-out war. We can legitimately do what we want because we're the good guys.

Since the television went back to normal scheduling after 9/11, I've grown tired of hearing, "Dissent is patriotic! Dissent is patriotic!" You know what else is patriotic? Patriotism.

If you were one of the people that clapped after watching Fahrenheit 9/11 in the theater, don't respond to this entry. This post isn't for you; it's only for people who can deal with facts and change their minds.

User Journal

Journal Journal: Hot Button Issue: Abortion 3

I have hit upon the solution to the abortion debate.

Sounds crazy, doesn't it? One of the most controversial and viciously debated topics of our day, and I have solved it. Pretty heady moment, I must admit...give me a second to take it all in.

Walk with me down the path of enlightenment on this one. You, the anti-choice adherent, must overcome one and only one obstacle to sell me on your argument. You must leave religion out of it. You must not argue based upon terms you cannot define or personal beliefs with only religious support. If you believe abortion is murder, that means you believe that a fetus is a life. If you believe a fetus is a life, then you believe you know when life begins. If you believe you know when life begins, then I submit to you that that belief is based upon your religion and your belief in some kind of god or religious text. In other words, you are trying to form public policy based upon your god.

The problem with this approach is that America does not have a theocratic government. It's right in our Constitution...you can't form laws based upon what your god tells you is right and wrong. Because if you can do that, then so can I, and I just happen to be a devil worshipper, who believes that life begins at 100. See the problem?

Don't be so smug, you anti-life abortion monger. I see you over there snickering away in the corner watching the anti-choice crowd struggle to convince me that "Science" generally agrees with them (which it doesn't--don't insult me, anti-choice people...I know science, and its definition of "human life" includes tumors, cancer, warts, boils, and pimples, all of which you would hack out in a second and let die). You anti-life believer, you want to legislate your belief that the fetus is not alive. In doing so, you, my friend, have the exact same problem as the anti-choicer. Simply: you have no friggin' idea when life begins either.

The truth is, we simply don't know at what moment life begins. The awful truth is, it probably doesn't begin at a moment. Like most things in the universe, it is most probably a spectrum over time. A fetus edges toward babyhood, becoming a truly sentient, conscious being long after having entered the world, and starting the inexorable slide toward harsh wakefulness even before conception. Oh yes, the spectrum is wide and all-encompassing, and will defy your best efforts to find that one point at which the mythical light switch is thrown. The truth is, it's a continuous motion, not the flick of a switch, toward life. Too bad for you.

So what are rational people to do? Well, let me ask you a question. I present you with an iron box that is locked and tell you that the box either has a ball in it, or it does not (I wanted an example that wouldn't require a Ph.D. to grasp, see?). So, leaving you in that state of knowledge, I then ask you to form some kind of belief about the state of the inside of the box, with respect to whether it contains a ball or not. Would any rational person commit to one side or the other in this thought experiment? Do you really think that we could call a person intelligent who decided to glom onto a fervent belief that there's a ball in there? Or the mope who decides that he's going to live his life based upon the idea that the box is empty?

Both are begging to be turned into fools. The rational mind simply admits that the current information is inadequate, and it's not sensible to form an opinion one way or the other. Therein lies the rub of my argument, friends. We do not know how to ethically gauge when it is ok to terminate a pregnancy and when it is not. And it is not a good idea to simply say, well, let's play it safe, let's assume the fetus is alive/there's a ball in the box. That's silly.

Regardless of whether you think there's a ball in this box or not, though, I think we can all agree that the least knowledgeable (and trustworthy) people in the room when it comes to this particular box are politicians. Why should they get a say over how to handle this? Shouldn't it be a board of medically trained ethicists? If only we kept a group of people like that around to make these kinds of tough decisions on a case-by-case basis, rather than having the uninformed pass laws. If only it were that simple, right?

Wait, though...we do have exactly that! It's called the medical board, and each state has one. And it oversees the individual actions of each individual doctor, and makes such calls on a regular basis. And they're even willing to admit they don't know for sure, they don't have all the answers, and they're struggling along as best they can based on the current state of knowledge. And that's what we pay them to do. So why not let them work?

The answer, my friends, is to simply repeal all law concerning abortion and let the doctors decide on a case-by-case basis. Doctors that make the wrong decisions that are out of line with the state's medical standards will be called on the carpet and punished, as the system is supposed to work. And no one has to pretend they know when life begins and no one has to spend any more federal tax money trying to convince others of this obvious falsity. And best of all, no one has to bring god into it.

It's astonishingly simple--the system would work just fine if we left such medical decisions up to the doctors and the informed authorities presiding over those situations...so let's just let the system work.

User Journal

Journal Journal: Political Discussion in America

Well, I've decided that political discussion is dead in America. And you know what killed it? Political correctness.

It's not that much of a leap, is it? Political correctness has had a chilling effect on intellectual debate and discourse in this country. At some point in the 90s, it became "incorrect" to say certain things or hold certain opinions. I remember hearing the two letters "PC" for the first time in my junior-year English course in high school, and it rubbed me the wrong way almost immediately. I'm only now beginning to crystallize why I felt such a twist in my gut...it squelched discussion of certain topics and placed high barriers to discussion of nearly all topics, for fear of invoking the out-of-favor phrase du jour and marking oneself as an unenlightened ignoramus.

The other day I read that Americans are less engaged in discussing politics than ever before. Is it any wonder, when a simple verbal misstep has the ability to cast one as a racist, sexist, homophobe, etc.? Already-controversial topics have gone off-limits altogether over the last ten years or so, and the effect has been a slowdown in the free flow of ideas. The two sides have drifted farther apart, and the nation gets more divided. The sensible majority begins to feel more and more abandoned as the centers of the two major political parties drift ever further to the extremes.

And we're left in our present state, where one must express one's views about controversial issues with such caution that it is often easier and safer to simply say nothing at all. The left is largely the perpetrator of this blight on our ability to communicate, and it has suffered the most damage because of it. How? Because critical analysis isn't required of one's opponent...it's enough to simply parse the person's language to form snap judgments about the quality of their ideas. (This kind of language fascism from the people who brought us the idea of "ebonics" at the height of the PC craze...oh, the irony.) Political correctness elevated knee-jerk reacting to the same level as critical thought in many ways. This, of course, means that over time, the knee-jerks will lose the debate.

And, oh, how they have lost. The White House is full of neo-cons, the Democratic Party is in disarray, and the far left are absolutely irate that news analysts like Bill O'Reilly have an audience. But it is you, far left person, that allowed all of this to happen, when you gave up convincing through rational argument and distanced yourself from the mainstream. And it is you who deserve to writhe in pain as you're forced to witness the reintroduction of thought derail your efforts to hijack the conversation.
