June 1, 2017

Programmers Should Take Linguistics

The older I get, the more I realize that 90% of all disagreements and social drama result from a miscommunication of some kind. Every time I wind up having to resolve a dispute, I try to get both sides of the story, only to realize that they're the same story, and that both parties were either in agreement or fighting over a perceived insult that never actually existed. Unsurprisingly, a disproportionate amount of this miscommunication involves programmers being far too strict with their interpretations of what certain words mean.

A linguistics class teaches you what a language actually is - a bunch of sounds that we mutually agree mean certain things. Language is, intrinsically, a social construct. The correct definition of a word is whatever the majority of your primary social circle thinks it is. However, this also means that if you interact with a secondary social circle, and they all think the word means something else, then whenever you interact with them, it really does mean something else. Language is inherently contextual, and the possible meanings of a word can change based on who is saying it and to whom they're saying it. If everyone else on Earth has decided that 'literally' can also mean 'figuratively', then it does, even if the dictionary says otherwise. It also means most people don't actually care whether you say jif or gif; they'll just say whatever pronunciation gets you to shut up about it.

It's important to realize that a word's meaning is not defined by a dictionary, but rather by how people use it. The dictionary is simply a reflection of its usage, and is generally a few years out of date. Just as the pronunciation of a word can vary by dialect, so can its potential meanings. Meanings can be invented by regional subdialects and spread outward from there, which is the origin of many slang terms. Sometimes we invent entirely new words, like "dubstep", but young words may have fuzzy definitions. Among some circles of electronic music listeners, "dubstep" is not a specific genre, but instead refers to all electronic music. Using dubstep to refer to any electronic song is currently incorrect in general parlance, because most people think it refers to a very specific kind of music. However, if this usage of the word continues to be popularized, eventually its meaning will change into a synonym for electronica, and the dictionaries will be updated to reflect this.

The fluid nature of language is why prescriptive grammar is almost always unnecessary, unless you are deliberately conforming to a grammar standard for a specific medium, such as writing a story. In almost any other context, so long as everyone in your social group understands your 'dialect' of English, it is valid grammar. However, if you attempt to use this dialect outside of your social circle with people who are not familiar with it, you will once again be in the wrong, as they will have no idea what you're talking about. This, however, does not mean there are no mandatory grammar rules; it's just that most of the rules that are actually necessary to speak the language properly are so ingrained that you don't even think about them.

A fantastic example of this is a little-known rule in English that adjectives must come in a very specific order: opinion-size-age-shape-color-origin-material-purpose Noun. So you can have a lovely little old rectangular green French silver whittling knife, but if you switch the order of any of those adjectives, you'll sound like a maniac. Conversely, you can never have a green great dragon. Despite the fact that this grammar rule is basically never mentioned in any prescriptive grammar book, it is mandatory, because if you don't follow it you won't be speaking proper English and people will have difficulty understanding you. True grammar rules are ones that, if not followed, result in nonsensical sentences that are difficult or impossible to parse correctly.

However, this does not mean all sentences that are difficult to understand have incorrect grammar. In fact, even some words are completely ambiguous by default. If I say I'm "dusting an object", the meaning of the phrase is completely dependent on what the object is. If it's a cake, I'm probably dusting it with something. If it's a shelf, I'm probably dusting it to get rid of the dust.

Programmers tend to be very literal-minded people, and often like to think that language is a set of strict rules defined by their English class. In reality, language is a fluid, dynamic, ambiguous, constantly changing enigma that exists entirely because we all agree on what a bunch of sounds mean. We need to recognize this, and when we communicate with other people, we need to be on the lookout for potential misinterpretations of what we say, so we can provide clarifications when possible. If someone says something that seems ridiculous, ask them to clarify. I'm tired of resolving disagreements that exist only because nobody stopped to ask the other side what they meant.

Stop demanding that everyone explain things in a way you'll understand. That's impossible, because everyone understands language slightly differently. Instead, ask for clarification if someone seems to be saying something unusual or before you debate a point they made. Maybe then we can keep the debates to actual disagreements, instead of arguing over communication failures.

May 4, 2017

Why Bother Making An App?

I have an idea for an app. According to startup literature, I'm supposed to get initial fundraising from small-time investors, or "angel" investors, possibly with help from an incubator. Then, after using this money to build an MVP and push the product on the marketplace, I do a Series A round with actual venture capitalists. Now, the venture capitalists probably won't give me any money unless I can give them a proper financial outlook, user growth metrics, and a solid plan for expansion, along with a market cap estimation. Alternatively, I can just use enough meaningless buzzwords and complete bullshit to convince them to give me $120 million for a worthless piece of junk.

Either way, venture capitalists usually want a sizable 10-30% stake in your company (depending on whether it's Series A, Series B, or Series C), given how much money they're pouring into a company that might fail. That's okay, though, because my app does reasonably well, sells lots of copies on the app store, and journalists write about it. Unfortunately, sales soon start tapering off, and ad revenue declines because customers either purchase the pro version or block the ads entirely. While the company is financially stable and making a modest profit, this isn't enough for the investors. They want growth, they need user engagement, they need ever-increasing profits. Simply building a stable company isn't enough for them.

So the investors start pushing for you to be bought out. You get lucky: your app would make a great accessory to Google Assistant or Cortana, and you get huge buyout offers from Microsoft, Google, and Amazon, because they have more money than most small countries. Investors immediately push for you to take the most lucrative offer from whoever is willing to give you the most cash for fucking over all of your customers. You can push back, but your power is limited, because those investors hold a significant chunk of your company. At best, you can pick the offer that is least likely to completely destroy your product.

If you get lucky, your cross-platform app that worked on everything gets discontinued and re-integrated into one device that people have to buy due to vendor lock-in. If you aren't lucky, your app gets discontinued and completely forgotten about, until someone else comes up with the same idea and the process repeats. Maybe this time they'll get bought out and actually integrated into something.

Either way, your customers lose. Every time. They are punished for believing that a new app, by some new company, could actually survive long enough to be useful to them without being consumed by the corporate monstrosities that run the world. If the company founders are nice, maybe some of the employees walk away rich, but most of them will probably just end up trapped inside a corporate behemoth until they can't take it anymore and finally quit. In your efforts to make the world a better place, you've managed to screw over your company, your customers, and even your employees, because investors don't care about your product, they care about milking you for all you're worth.

But hey, at least you're rich, right?

March 23, 2017

Companies Can't Be Apolitical

One of the most common things I hear from people is that companies should be "apolitical". The most formal way this concept is expressed is that a company should make decisions based on what maximizes profits and not political opinions. Unfortunately, the statement "companies should only care about maximizing profits" is, itself, a political statement (and one I happen to disagree with). Thus, it is fundamentally impossible for a company to be truly apolitical, for the very act of attempting to be apolitical is a political statement.

How much a company can avoid politics generally depends on both the type and size of the company. Once your company becomes large enough, it will influence politics simply by virtue of its enormous size, and it eventually becomes an integral part of political debates whether it wants to or not. Large corporations must take the political climate into account when making business decisions, because blindly attempting to maximize profit may turn the public against them and destroy their revenue sources - thus, politics itself becomes part of the profit equation and cannot be ignored. Certain types of businesses embody political statements simply by existing. Grindr, for example, is a dating app for gay men. Its entire business model depends on enabling an activity that certain fundamentalists consider inherently immoral.

You could, theoretically, try to solve part of this quandary by saying that companies should also be amoral, insofar as the free market should decide moral values. The fundamentalists would then protest the company's existence by not using it (but then, they never would have used it in the first place). However, the problem is that, once again, this very statement is itself political in nature. Thus, by trying to be either amoral or moral, a company is making a political statement.

The issue at play here is that literally everything is political. When most everyone agrees on basic moral principles, it's easier to pretend that politics is really just about economic policy and lawyers, but our current political divisions have demonstrated that this is a fantasy. Politics are the fundamental morals that society has decided on. It's just a lot easier to argue about minor differences in economic policy instead of fundamental differences in basic morality.

Of course, how companies participate in politics is also important to consider. Right now, a lot of companies participate in politics by spending exorbitant amounts of money on lobbyists. This is a symptom of money in politics in general, and should be solved not by removing only corporate money from politics, but by removing all money, because treating spending as a form of speech gives more speech to the rich, which inherently discriminates against the poor and violates the founding assertion that all men are created equal (but no one really seems to be paying attention to that line anyway).

Instead of using money, corporations should do things that uphold whatever political values they believe in. As the saying goes, actions speak louder than words (or money, in this case). You could support civil rights activism by being more inclusive with your hiring and promoting a diverse work environment. Or, if you live in the Philippines, you could create an app that helps death squads hunt down drug users so they can be brutally executed. What's interesting is that most people consider the latter to be a moral issue as opposed to a political one, which seems to derive from the fact that once you agree on most fundamental morals, we humans simply make up a bunch of pointless rules to satisfy our insatiable desire to tell other humans they're wrong.

We've lived in a civilized world for so long, we've forgotten the true roots of politics: a clash between our fundamental moral beliefs, not a squabble over how much parking fines should be. Your company will make a political statement whether you like it or not, so you'd better make sure it's the one you want.

March 7, 2017

I Can't Hear Anything Below 80 Hz*

* at a comfortable listening volume.
EDIT: I have confirmed all the results presented here by taking the low frequency test with someone standing physically next to me. They heard a tone beginning at 30 Hz, and by the time I could hear a very faint tone around 70 Hz, they described the tone as "conversation volume level", which is about 60 dB. I did not reach this perceived volume level until about 120 Hz, which strongly correlates with the experiment. More specific results would require a professional hearing test.

For almost 10 years, I've suspected that something was wrong with my ability to hear bass tones. Unfortunately, while everyone is used to people having difficulty hearing high tones, nobody takes you seriously if you tell them you have difficulty hearing low tones, because most audio equipment has shitty bass response, and human hearing isn't very precise at those frequencies in the first place. People generally say "oh, you're just supposed to feel the bass, don't worry about it." This was extremely frustrating, because one of my hobbies is writing music, and I have struggled for years to do proper bass mixing, which is basically the only activity on the entire planet that actually requires hearing subtle changes in bass frequencies. This is aggravated by the fact that most hearing tests are designed to detect issues with high frequencies, not low ones, so all the basic hearing tests I took at school came back "perfectly normal". Since I now have professional studio monitor speakers, I'm going to use science to prove that I have an abnormal frequency sensitivity curve that severely hampers my ability to differentiate bass tones. Unfortunately, at the moment I live alone and nowhere near anyone else, so I will have to prove that my equipment is not malfunctioning without being able to actually hear it.

Before performing the experiment, I did this simple test as a sanity check. At a normal volume level, I start to hear a very faint tone in that example at about 70 Hz. When I sent it to several other people, they all reported hearing a tone around 20-40 Hz, even when using consumer-grade hardware. This is clear evidence that something is very, very wrong, but I have to prove that my hardware is not malfunctioning before I can definitively state that I have a problem with my hearing.
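
Out of curiosity, here's a minimal sketch of how such a test tone can be generated, assuming raw 16-bit PCM output at 44.1 kHz (this is my own illustration, not the code behind the linked test):
#include <cmath>
#include <cstdint>
#include <fstream>
#include <vector>

// Write two seconds of a 70 Hz sine wave as raw signed 16-bit mono PCM.
int main()
{
  const double sampleRate = 44100.0;
  const double frequency = 70.0; // roughly the lowest tone I can faintly hear
  std::vector<int16_t> samples(static_cast<size_t>(sampleRate * 2.0));
  for(size_t i = 0; i < samples.size(); ++i)
    samples[i] = static_cast<int16_t>(16383.0 * sin(6.283185307179586 * frequency * i / sampleRate));
  std::ofstream out("tone.raw", std::ios::binary);
  out.write(reinterpret_cast<const char*>(samples.data()), samples.size() * sizeof(int16_t));
}
Import the resulting file into an audio editor as signed 16-bit mono PCM at 44.1 kHz to play it back, and change the frequency to test different tones.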

For this experiment, I will be using two JBL Professional LSR305 studio monitors plugged into a Focusrite Scarlett 2i2. Since these are studio monitors, they should have a roughly linear response all the way down to 20 Hz. I'm going to use a free sound pressure app on my Android phone to verify that they have a relatively flat frequency response. The app isn't suitable for measuring very quiet or very loud sounds, but we won't be measuring anything past 75 dB in this experiment because I don't want to piss off my neighbors.

Speaker Frequency Response Graph

The studio monitor manages to put out relatively stable noise levels until it appears to fall off at 50 Hz. However, when I played a 30 Hz tone at a volume loud enough for me to feel, the app still reported no pressure, which means the microphone can't detect anything lower than 50 Hz (I was later able to confirm that the studio monitor is working properly when someone came to visit). Of course, I can't hear anything below 50 Hz anyway, no matter how loud it is, so this won't be a problem for our tests. To compensate for variance in the speakers' frequency response, I use the sound pressure app to measure the actual sound intensity being emitted.

The first part of the experiment finds the softest volume at which I can detect a tone at each frequency, starting from D4 (293 Hz) and working down note by note. The loudness of the tone is measured using the sound pressure app. For frequencies above 200 Hz, I can detect tones at volumes only slightly above the background noise in my apartment (15 dB). By the time I reached 50 Hz, I was unwilling to go any louder (and the microphone would have stopped working anyway), but this is already enough to establish 50 Hz as the absolute limit of my hearing ability under normal circumstances.

Threshold of Hearing Graph
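
For reference, the test frequencies follow directly from equal temperament: each semitone down divides the frequency by the twelfth root of two. A quick sketch (my own, just to show the math) that prints the ladder from D4 down to my 50 Hz limit:
#include <cmath>
#include <cstdio>

int main()
{
  double freq = 293.66; // D4
  for(int i = 0; freq >= 50.0; ++i) // stop at the 50 Hz limit established above
  {
    printf("%2d semitones below D4: %6.2f Hz\n", i, freq);
    freq /= pow(2.0, 1.0 / 12.0); // drop one semitone
  }
}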

To get a better idea of my frequency response at more reasonable volumes, I began with a D4 (293 Hz) tone playing at a volume that corresponded to 43 dB SPL on my app, and then recorded the sound pressure level of each note once its volume seemed to match the other notes. This gives me a rough approximation of the 40-phon equal loudness curve, and allows me to overlay that curve onto the ISO 226:2003 standard:

Equal Loudness Contour

These curves make it painfully obvious that my hearing is severely compromised below 120 Hz and becomes nonexistent below 50 Hz. Because I can still technically hear bass at extremely loud volumes, I can pass a hearing test that checks whether I can hear low tones, but the instant the tones are not presented in isolation, they are drowned out by higher frequencies due to my impaired sensitivity. Because every instrument that isn't a pure sine wave produces harmonics above its fundamental frequency, the only thing I'm hearing when a sub-bass is playing is its high-frequency harmonics. Even then, I can still feel bass if it's loud enough, so the bass experience isn't completely ruined for me, but mixing is almost impossible because of how bass frequencies interact with the waveform. Bass frequencies take up lots of headroom, which is why, in a trance track, you can tell where the kicks are just by looking at the waveform itself:

Bass Example

When mixing, you must carefully balance the bass with the rest of the track; too much bass overwhelms all the other frequencies. Because of this, when I send my tracks to friends for help with mixing, I can tell that the track sounds better afterward, but I can't tell why. The reason is that they are adjusting bass frequencies I literally cannot hear. All I can hear is the end result, which has less frequency crowding, which makes the higher frequencies sound better, even though I can't hear any other difference in the track, so it seems like black magic.

It's even worse because I am almost completely incapable of differentiating tones below 120 Hz. You can play any note below B2 and I either won't be able to hear it or it'll sound the same as all the other notes. I can only consistently differentiate semitones above 400 Hz. Between 120 and 400 Hz, I can sometimes tell notes apart, but only when they're played in total isolation. When they're embedded in a song, it's hopeless. This is why, in AP Music Theory, I was able to perfectly transcribe all the notes in the 4-part writing except the bass, while no other student seemed to have this problem. My impaired sensitivity to low frequencies means they get drowned out by higher frequencies, making it more and more difficult to differentiate bass notes. In fact, in most rock songs, I can't hear the bass guitar at all. The only way for me to hear the bass guitar is for it to be played by itself.

Incidentally, this is probably why I hate dubstep.

For testing purposes, I've used the results of my sensitivity testing to create an EQ filter that mimics my hearing problems as best I can. I can't tell if the filter is on or off. For those of you who use FL Studio, the preset can be downloaded here.

EQ Curve
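
For those who don't use FL Studio, here's a rough sketch of the core idea behind such a preset: a low-shelf biquad, with coefficients from the standard Audio EQ Cookbook, that attenuates everything below a corner frequency. The -30 dB at 120 Hz figures are illustrative assumptions on my part; the real preset uses multiple bands matched to the measured curve above.
#include <cmath>

// One low-shelf biquad (Audio EQ Cookbook formulas, shelf slope S = 1).
struct LowShelf
{
  double b0, b1, b2, a1, a2;             // normalized coefficients
  double x1 = 0, x2 = 0, y1 = 0, y2 = 0; // filter state

  LowShelf(double fs, double f0, double gainDB)
  {
    double A = pow(10.0, gainDB / 40.0);
    double w0 = 6.283185307179586 * f0 / fs;
    double alpha = sin(w0) / 2.0 * sqrt(2.0);
    double cw = cos(w0), sqA = sqrt(A);
    double a0 = (A + 1) + (A - 1) * cw + 2 * sqA * alpha;
    b0 = A * ((A + 1) - (A - 1) * cw + 2 * sqA * alpha) / a0;
    b1 = 2 * A * ((A - 1) - (A + 1) * cw) / a0;
    b2 = A * ((A + 1) - (A - 1) * cw - 2 * sqA * alpha) / a0;
    a1 = -2 * ((A - 1) + (A + 1) * cw) / a0;
    a2 = ((A + 1) + (A - 1) * cw - 2 * sqA * alpha) / a0;
  }

  double process(double x) // run one sample through the filter
  {
    double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
    x2 = x1; x1 = x; y2 = y1; y1 = y;
    return y;
  }
};

// Usage: LowShelf hearing(44100.0, 120.0, -30.0); feed every sample of a
// track through hearing.process() to roughly simulate what I hear.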

Below is a song I wrote some time ago that was mastered by a friend who can actually hear bass, so hopefully the bass frequencies in this are relatively normal. I actually have a bass synth in this song I can only barely hear, and had to rely almost entirely on the sequencer to know which notes were which.



This is the same song with the filter applied:



By inverting this filter, I can attempt to "correct" for my bass hearing, although this is only effective down to about 70 Hz, which unfortunately means the entire sub-bass spectrum is simply inaudible to me. To accomplish this, I combine the inverted filter with a mastering plugin that completely removes all frequencies below 60 Hz (because I can't hear them) and then lowers the volume by about 8 dB so the amplified bass doesn't blow up the waveform. This doesn't seem to produce any audible effect on songs without significant bass, but when I tried it on a professionally mastered trance song, I was able to hear a small difference in the bass kick. I also tried it on Brothers In Arms and, for the first time, noticed a very faint bass cello that I had never heard before. If you are interested, the FL Studio mixer track state that applies the corrective filter is available here, but for normal human beings the resulting bass is probably offensively loud. For that same reason, it is unfortunately impractical for me to use, because listening to bass frequencies at near 70 dB levels is bad for your hearing, and it doesn't fix my impaired fidelity anyway, but at least I now know why bass mixing has been so difficult for me over the years.
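
As a sketch of what that chain does (with illustrative parameter values, not the exact preset): run the inverse of the hearing-loss shelf from the earlier snippet (+30 dB instead of -30 dB), then cut everything below 60 Hz with a biquad high-pass, again from the Audio EQ Cookbook, then scale the output by -8 dB:
#include <cmath>

// Second-order Butterworth-style high-pass biquad (Audio EQ Cookbook).
struct HighPass
{
  double b0, b1, b2, a1, a2;
  double x1 = 0, x2 = 0, y1 = 0, y2 = 0;

  HighPass(double fs, double f0, double Q = 0.7071)
  {
    double w0 = 6.283185307179586 * f0 / fs;
    double alpha = sin(w0) / (2.0 * Q), cw = cos(w0);
    double a0 = 1 + alpha;
    b0 = (1 + cw) / 2 / a0; b1 = -(1 + cw) / a0; b2 = b0;
    a1 = -2 * cw / a0; a2 = (1 - alpha) / a0;
  }

  double process(double x)
  {
    double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
    x2 = x1; x1 = x; y2 = y1; y1 = y;
    return y;
  }
};

// -8 dB of headroom: multiply every sample by 10^(-8/20), roughly 0.398.
// Full chain per sample: out = 0.398 * hp.process(boost.process(in)),
// where boost is the earlier LowShelf constructed with +30 dB.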

I guess if I'm going to continue trying to write music, I need to team up with one of my friends who can actually hear bass.

February 13, 2017

Windows Won't Let My Program Crash

It's been known for a while that Windows has a bad habit of eating your exceptions if you're inside a WinProc callback function. This behavior can cause all sorts of mayhem, like your program vanishing into thin air without any error message, due to a stack overflow that terminated the program without actually throwing an exception. What I didn't realize is that it also eats assert(), which makes debugging hell: the assertion would fire, the entire user callback would immediately terminate without any stack unwinding, and then Windows would just... keep going, even though the program was now in a laughably corrupt state, because only half the function had executed.

While trying to find a way to fix this, I discovered that there are no fewer than four different ways Windows can choose to eat exceptions from your program. I had already told the kernel to stop eating my exceptions using the following code:
#define EXCEPTION_SWALLOWING 0x1 // a.k.a. PROCESS_CALLBACK_FILTER_ENABLED
typedef BOOL (WINAPI *tGetPolicy)(LPDWORD lpFlags);
typedef BOOL (WINAPI *tSetPolicy)(DWORD dwFlags);
DWORD dwFlags;
HMODULE kernel32 = LoadLibraryA("kernel32.dll");
assert(kernel32 != 0);
tGetPolicy pGetPolicy = (tGetPolicy)GetProcAddress(kernel32, "GetProcessUserModeExceptionPolicy");
tSetPolicy pSetPolicy = (tSetPolicy)GetProcAddress(kernel32, "SetProcessUserModeExceptionPolicy");
if(pGetPolicy && pSetPolicy && pGetPolicy(&dwFlags))
  pSetPolicy(dwFlags & ~EXCEPTION_SWALLOWING); // Turn off the filter
However, despite this, COM itself was wrapping an entire try {} catch {} statement around my program, so I had to figure out how to turn that off, too. Apparently some genius at Microsoft decided the default behavior should be to just swallow exceptions when they were making COM, and now they can't change this default behavior because it'd break all the applications that now depend on COM eating their exceptions to run properly! So, I turned that off with this code:
CoInitialize(NULL); // do this first
if(SUCCEEDED(CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_PKT_PRIVACY,
  RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_DYNAMIC_CLOAKING, NULL)))
{
  IGlobalOptions *pGlobalOptions;
  HRESULT hr = CoCreateInstance(CLSID_GlobalOptions, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGlobalOptions));
  if(SUCCEEDED(hr))
  {
    hr = pGlobalOptions->Set(COMGLB_EXCEPTION_HANDLING, COMGLB_EXCEPTION_DONOT_HANDLE);
    pGlobalOptions->Release();
  }
}
There are two additional functions that could be swallowing exceptions in your program: _CrtSetReportHook2 and SetUnhandledExceptionFilter, but both of these are for SEH or C++ exceptions, and I was throwing an assertion, not an exception. I was actually able to verify, by replacing the assertion #define with my own version, that throwing an actual C++ exception did crash the program... but an assertion didn't. Specifically, an assertion calls abort(), which raises SIGABRT, which crashes any normal program. However, it turns out that Windows was eating the abort signal, along with every other signal I attempted to raise, which is a problem, because half the library is written in C, and C obviously can't raise C++ exceptions. The assertion failure even showed up in the output... but didn't crash the program!
Assertion failed!

Program: ...udio 2015\Projects\feathergui\bin\fgDirect2D_d.dll
File: fgEffectBase.cpp
Line: 20

Expression: sizeof(_constants) == sizeof(float)*(4*4 + 2)
No matter what I do, Windows refuses to let the assertion failure crash the program, or even trigger a breakpoint in the debugger. In fact, calling the __debugbreak() intrinsic, which outputs an int 3 CPU instruction, was completely ignored, as if it simply didn't exist. The only reliable way to actually crash the program without using C++ exceptions was to do something like divide by 0, or attempt to write to a null pointer, which triggers a segfault.
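
Given that, one workaround is to replace the assertion #define with a version that forces an access violation instead of calling abort(). This is only a sketch, and MY_ASSERT is a hypothetical name rather than the macro from my actual code:
#include <cstdio>

// Print the failed expression, then write to a null pointer, which triggers
// a real segfault that Windows won't swallow the way it swallows abort().
#define MY_ASSERT(e) \
  do { \
    if(!(e)) { \
      fprintf(stderr, "Assertion failed: %s (%s:%d)\n", #e, __FILE__, __LINE__); \
      *(volatile int*)0 = 0; \
    } \
  } while(0)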

Any good developer should be using assertions to verify their assumptions, so having assertions silently fail and then corrupt the program is even worse than not having them at all! Now you could have an assertion in your code that's firing, terminating that callback, leaving your program in a broken state, and then the next message that's processed blows up for strange and bizarre reasons that make no sense because they're impossible.

I have a hard enough time getting my programs to work; I didn't think it'd be this hard to make them crash.