Thursday, April 28, 2011

Education in America - Unemployment by field of study

As the costs of education in America continue to rise, people must seriously consider the value of education as it relates to the job market.  Education isn't just about getting a job, but becoming a "professional" in any field often does involve some sort of specialty education, particularly at the collegiate level.  The U.S. Bureau of Labor Statistics tracks both employment and unemployment numbers each year, broken down by field (or trade).  Unfortunately, the bureau doesn't present the information in a format that's easily analyzed, so I imported the numbers into Excel and did some manipulation.  Here are the results (click on image for full size):


Unemployment by field of study - from the U.S. Bureau of Labor Statistics


In good economic times, most professional fields enjoy a relatively low rate of unemployment, below 3 percent.  But in bad economic times, engineering and business fields take the brunt of the professional layoffs.  Management unemployment, of course, tends to stay below both, since managers are the ones doing the laying off.  The absolute best job security is in the healthcare and legal fields.  The following table is a more detailed view of the data, including more professions.


Unemployment rate by field detailed table - from the U.S. Bureau of Labor Statistics


Notable are the construction industry, which hit 20% unemployment in 2010, and the farming industry close behind at 16%.  For those who may be curious about the actual numbers of workers involved, here they are (click to enlarge).  Be sure to compare these to the number of people graduating each year by field of study.

  
Employment numbers by field - from the U.S. Bureau of Labor Statistics

Next up are the salaries for these fields and the trends over the last 10 years.

Monday, April 25, 2011

Decomposing a song into chords, Part 4

In this part we take the program from part 3 and execute it against three real songs.  The first song is Tom Petty's "You Don't Know How It Feels", the second song is Green Day's "When It's Time", and the third song is Coldplay's "Yellow".

Beginning with the first portion of the Tom Petty song, the following is a spectrum compiled from time 18 to 25 seconds.
Tom Petty - You Don't Know How It Feels, time 18 to 25 seconds
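For reference, a spectrum like the one above can be produced in SciLab along the following lines.  This is only a sketch, assuming the song has already been loaded into x and down-sampled to Fs = 1000 Hz as in part 2; the variable names here are illustrative.
t1 = 18; t2 = 25;                        // window of interest, in seconds (assumed)
seg = x(round(t1*Fs):round(t2*Fs));      // extract the 18 to 25 second excerpt
seg = matrix(seg, 1, -1);                // force a row vector
seg = window('hm', length(seg)) .* seg;  // Hamming window to reduce leakage
Y = abs(fft(seg));
Y = Y(1:round(length(Y)/2));             // keep the half below Nyquist
freqs = (0:length(Y)-1) * Fs / length(seg);
plot2d(freqs, Y);                        // amplitude spectrum of the excerpt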

The spectrum is clearly more crowded than in the single guitar case from part 2.  The bass guitar is visible around 50 Hz while the vocals are intermixed with the acoustic guitar at the higher frequencies.  Luckily, most songs follow the basic rules of music theory in that the vocal frequencies match the guitar note frequencies such that a proper chord is maintained between them.  The output of this time window is shown below.  Note that t = 0 is at 18 seconds in the song.
Time =   0.0, Chord = B   , Notes = B 
Time =   0.3, Chord = E   , Notes = B  E 
Time =   1.0, Chord = B   , Notes = B 
Time =   1.8, Chord = Esus, Notes = A  B 
Time =   2.8, Chord = Esus, Notes = A  B 
Time =   3.6, Chord = E   , Notes = E 
Time =   4.9, Chord = F#   , Notes = F#
Time =   5.6, Chord = A   , Notes = A  E

This performance is quite poor.  The proper chord progression in the song at this point in time was E - A - E - A.  Examining the underlying notes, however, we can see that the algorithm did correctly measure parts of the chord, but not enough to properly reconstruct it.  The E chord has a B in it (its fifth), and obviously the A chord has an A in it.  The F# is a total mystery.

Moving on to Green Day's "When It's Time", the following is a spectrum from the first 11 seconds of the song.

[caption id="attachment_263" align="aligncenter" width="366" caption="Green Day - When it's Time, time 0 to 11 seconds"][/caption]

Note that there is nothing at 50 Hz where the bass guitar was in the last song.  This is because the beginning of "When It's Time" is acoustic guitar and vocals only.  The following is the program output for the same time period.
Time =   0.0, Chord = G   , Notes = D  G 
Time =   1.5, Chord = G   , Notes = B  D  G 
Time =   3.3, Chord = D   , Notes = A  D  F#
Time =   4.1, Chord = D   , Notes = D 
Time =   5.9, Chord = G   , Notes = B  D 
Time =   9.7, Chord = G   , Notes = G

This shows some improvement.  The proper chord progression for this time period in the song is G - D/F# - Em - C.  The Em is missed, but the ending C chord is detected as a G note (the fifth in C).  This performance, though, leaves much to be desired.

The spectrum for the final song, Coldplay's "Yellow", is shown below.  This spectrum is for a time window of 33 to 50 seconds (the first main verse), showing guitar, bass, drums, and lyrics.

[caption id="attachment_264" align="aligncenter" width="366" caption="Coldplay - Yellow, time 33 to 50 seconds"][/caption]

The following is the program output for that same time period.



Time =   0.0, Chord = B   , Notes = B  F#
Time =   2.3, Chord = B   , Notes = B  F#
Time =   3.8, Chord = B   , Notes = B  F#
Time =   4.4, Chord = NA , Notes = B  C  F#
Time =   5.4, Chord = B   , Notes = B  F#
Time =   6.1, Chord = F#   , Notes = C# F#
Time =   6.7, Chord = F#   , Notes = F#
Time =  10.0, Chord = F#   , Notes = C# F#
Time =  10.5, Chord = F#   , Notes = F#
Time =  11.3, Chord = NA , Notes = E  F  F#
Time =  11.8, Chord = NA , Notes = E  F 
Time =  12.5, Chord = Bsus, Notes = E  F#
Time =  13.1, Chord = E   , Notes = E 
Time =  14.1, Chord = NA , Notes = B  E  F 
Time =  15.4, Chord = E   , Notes = B  E 
Time =  16.1, Chord = NA , Notes = E  F

The performance here is actually quite good, as the algorithm managed to pick out all three chords in the progression B - F# - E.  There is some trouble with the E chord, as the neighboring F note is being incorrectly measured at times.  This note is only a half step away, and the error could be the result of difficulty in measuring the E note cleanly in the presence of noise.


The algorithm performance against these three songs shows some promise, but the performance and accuracy clearly need to be improved.  In the next and final step, we will improve the accuracy of the program to the point where it can reliably reconstruct the chords in these same three songs.  Additional time windows will be measured as well to demonstrate performance at different points within the songs.


 

Wednesday, April 20, 2011

Education in America - high school grade inflation

Most efforts to reform education in America come with some sort of push to improve graduation rates.  For some strange reason, graduation rates are seen as an excellent measure of school and teaching success.  In reality, graduation rates are largely irrelevant, and what really matters is how much the students actually know.  When schools and teachers are evaluated based on graduation rates, their natural response is to simply graduate more students.  They don’t actually have to teach any better than before and students don’t need to know any more than they did before.  In fact, standardized testing, the only reasonable way to actually compare performance from teacher to teacher, school to school, and state to state, is getting a bad rap for being unfair to certain groups or inaccurate in the measurement of knowledge.  Maybe such tests can be unfair in some ways, but they are certainly better than no measure at all.  The National Center for Education Statistics doesn’t have data on the breakdown of high school grades, so proving that grade inflation is occurring is difficult.  But there is other data available that strongly supports that conclusion.

First off, high school graduation rates have been steadily improving for the last 30 years, consistent with the push to improve school performance.  The following table shows the national average dropout and graduation rates (all ethnicities combined) from 1980 to 2008.  The dropout rate was nearly cut in half.  One would assume then that the students must know more on average, since more of them are graduating.

National Dropout/Graduation Rates by Year - from NCES

Looking at standardized test scores, however, we see that they don’t.  The same math test was given to 9-, 13-, and 17-year-olds from 1973 to 2008, on a scale of 0 to 500.  The 9- and 13-year-olds show noticeable improvement over those years; the 17-year-olds, however, haven't changed.  Apparently the primary schools are doing something right that the secondary schools are lacking.


Standardized Test Scores

Finally, there is some data available at the college level showing the high school grades of incoming freshmen. High school GPA can be correlated to remedial classes taken by the students in college for two time periods, 1995 and 2004.  Sure enough, the percentage of straight A students needing remedial coursework in college more than doubled in that time period, while the overall percentage went up only slightly.  This is particularly troubling if you consider that colleges are also experiencing grade inflation, so the bar on what would necessitate remedial coursework is no doubt lower today than in the past.


Remedial College Courses


So there you have it.  Yes, graduation rates are improving, but no, education in America is not better off for it.



References

http://nces.ed.gov/datatools/index.asp?DataToolSectionID=4



College Education in America - poor choice of majors


President Obama’s 2011 State of the Union address stated his desire to see this country return to technological greatness.  He sees future prosperity directly tied to our ability to innovate and drive new technologies and industries.  To that end, college education is critical in teaching the skills professionals need in the workplace.  But education in America, and “college education” in particular, is a very broad term that produces graduates with a huge range of capabilities in the workforce.  Science and Engineering are the fields of study that invent technologies and create new industries.  Yet education in America produces more art majors than engineering majors, way more psychology majors than engineering majors, and four times more business majors than engineering majors.  More disturbing is the fact that the trend is worsening with every passing year.


Number of Degrees by Major - from NCES

Interestingly enough, the Huffington Post recently featured an article detailing two different points of view on education in America from two of the most famous engineers of all time.  Bill Gates supports targeted investment in science and engineering fields of study, while Steve Jobs supports a broader educational background including arts and humanities.  On the one hand, all the engineering knowledge on the planet is useless without creative and relevant applications of the knowledge.  On the other hand, you can’t creatively apply anything if you don’t have the difficult, tedious, unglamorous technical knowledge.  Many students are turned off by engineering for this very reason: it isn’t fun.  A large part of education in America is about having fun.  Ideally we could create a program that combines both Gates' and Jobs’ points of view.  Focus engineering education in America on hard technical knowledge, but ground it in real-world creative applications.

Before any of that matters, however, students need to actually enter the engineering field of study.  An overwhelming majority of students choose liberal arts and business fields.  There is certainly an important and necessary place for these sorts of majors in our society, but when you’ve got 8.5 liberal arts majors for every 1 engineering major, something is wrong.  The economy has shifted from invention, development, and production to services and management.  Invention, development, and production (i.e. innovation) are occurring in other parts of the world now, and are no longer led by America.  In fact, looking at the percentage of degrees by field of study across the world, education in America is clearly trailing in engineering, math, and science.  30% of German students graduate with a science or engineering degree, as do 28% of French and 25% of Japanese students.  In America, only 16% of college students graduate with a degree in any science, math, or engineering field.  The world average is 23%; that’s sad.


Degrees by Country - from NCES

Ironically, Japan seems to have both the largest percentage of engineering majors and the largest percentage of arts majors at the same time.  They must be taking the advice of both Gates and Jobs.  At the end of the day, education in America boils down to our culture and what career young students choose for themselves.  Teenagers have no idea what these jobs actually entail, so most of their career choices are based on society’s perception of the fields.  Businessmen are powerful and strong, engineers are geeks.  High school students are by default pressured into attending college and told to pick a major that’s “fun” for them since they’ll be doing it for the rest of their lives.  We need to fix the perception of science and engineering education in America, and make students realize that difficult technical fields are “fun” too, and nothing beats changing the world for the better.

For those of you not interested in changing the world, here is the salary information by job field.  Money is usually what it all boils down to anyway.


References:
National Center for Education Statistics: http://nces.ed.gov/fastfacts/display.asp?id=37

Saturday, April 16, 2011

Decomposing a song into chords, Part 3

In the last part, we showed how to reconstruct a reasonably accurate train of notes at each time interval throughout a song, using real guitar audio input.  In this part, we reconstruct the parent chords from which the notes originate at each instant in time.  Starting where we left off with the note_intervals variable, after filtering out transients and duplicates, we take the remaining notes at each time interval and attempt to place them into parent chords.

The first step is to define the allowable structures of parent chords.  This is done by setting up a matrix of interval groupings, one row for each chord.  In this part, we will limit the matrix to just the 12 chords with a major third and perfect fifth.  The resulting 12x3 chord table defines the chord structures for the 12x1 vector of notes from part 1.
(1) notes = [27.5; ...     // A
29.13524; ... // A#
30.86771; ... // B
32.7032; ...  // C
34.64783; ... // C#
36.7081; ...  // D
38.89087; ... // D#
20.60172; ... // E
21.82676; ... // F
23.12465; ... // F#
24.49971; ... // G
25.95654];    // G#

chord_table = [1 5 8; ...  // A
2 6 9; ...  // A#
3 7 10; ... // B
4 8 11; ... // C
5 9 12; ... // C#
6 10 1; ... // D
7 11 2; ... // D#
8 12 3; ... // E
9 1 4; ...  // F
10 2 5; ... // F#
11 3 6; ... // G
12 4 7];    // G#

The unique set of notes in the note_intervals variable can now be compared against the notes in the chord table to find applicable chords.  Since only some of the notes in the chord may be detected at a time, the algorithm must be tolerant of missing notes.  This is done by first assuming that any chord could be the correct one, and then whittling away at the possibilities one by one for each detected note at each instant in time.
(2) unique_intervals = unique(note_intervals(idx(ii),find(note_intervals(idx(ii),:) > 0)));
root_chords = 1:12;   // start by assuming any of the 12 chords could be correct
for jj=1:length(unique_intervals)
[rr, cc] = find(chord_table == unique_intervals(jj));   // rows of chord_table containing this note
root_chords = intersect(root_chords, rr);
end
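As a quick sanity check, the same whittling loop can be run on a hypothetical detection of just the notes B and G (indices 3 and 11 in the notes vector).  Only the G row of chord_table contains both notes, so it is the lone survivor:
unique_intervals = [3 11];   // hypothetical detection: B and G
root_chords = 1:12;          // start by assuming any chord is possible
for jj=1:length(unique_intervals)
[rr, cc] = find(chord_table == unique_intervals(jj));
root_chords = intersect(root_chords, rr);
end
disp(root_chords);           // prints 11, i.e. the G major chord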

The variable root_chords should now (ideally) contain one remaining chord, the correct chord for the detected notes.  If only one note is detected in a particular time interval, it is assumed that that note is the root of the chord.  The root chord, along with the underlying notes, is then printed to the screen for each instant in time.  The following is the output for the same time interval reported in part 2.  The underlying notes have been reduced to just the unique components.
Time =  18.9, Chord = G  , Notes = B  G 
Time =  22.3, Chord = C  , Notes = E  G 
Time =  23.0, Chord = G  , Notes = D  G 
Time =  23.6, Chord = G  , Notes = G 
Time =  24.1, Chord = G  , Notes = B  G 
Time =  24.6, Chord = G  , Notes = G 
Time =  25.1, Chord = C  , Notes = E  G 
Time =  25.6, Chord = NA , Notes = A  D  E 
Time =  27.4, Chord = C  , Notes = E  G 
Time =  28.4, Chord = G  , Notes = D  G 
Time =  29.2, Chord = G  , Notes = B  G 
Time =  30.2, Chord = C  , Notes = E  G 
Time =  31.0, Chord = D  , Notes = A  D 
Time =  32.8, Chord = C  , Notes = E  G 
Time =  33.5, Chord = F  , Notes = A  F 
Time =  34.3, Chord = F  , Notes = A  F 
Time =  35.3, Chord = C  , Notes = E  G 
Time =  36.4, Chord = F  , Notes = A  F 
Time =  37.9, Chord = C  , Notes = C  E  G 
Time =  38.9, Chord = NA , Notes = A  E  F 
Time =  39.4, Chord = F  , Notes = A  F 
Time =  43.0, Chord = C  , Notes = E  G 
Time =  44.0, Chord = NA , Notes = A  D  E 
Time =  45.1, Chord = D  , Notes = A  D 
Time =  45.6, Chord = C  , Notes = E  G 
Time =  46.6, Chord = G  , Notes = D  G 
Time =  47.4, Chord = G  , Notes = B  G

Note that there are several "NA"s reported for chords that could not be properly reconstructed.  Examining the underlying notes reveals an invalid interval present for the parent chord.  This could be due to natural human error in playing the guitar riff, or measurement error in the FFT from part 1.  Also note that in many cases, only two notes were detected of the three that make up the chord.  In a few cases, only one note was detected.  However, the basic chord progression of G - C - D - C - G is clearly present with the bridge of C - F - C - F visible at 33 seconds.  In the next part, we attempt to run the algorithm against a full song (guitar, bass, lyrics, drums) and make tweaks as necessary.

Thursday, April 14, 2011

Decomposing a song into chords, Part 2

In the first part of this series, we showed how to mathematically decompose the contents of a song into frequency and amplitude components, and how to identify the notes corresponding to the detected frequencies above a certain amplitude threshold.  The algorithm was run against a simulated input for four notes, taken as two 2-note chords.

Now we run the algorithm against a real audio file containing an electric guitar playing the following chord progression: G - C - D - C  - G with three bridges containing a different chord progression of F - C - F - C.  The following is a SciLab output using the mapsound function illustrating the audio input file.

The bridge chords (F - C - F - C) can be seen at times of roughly 35, 65, and 90 seconds.  The audio was recorded at 44.1 kHz and down-sampled to 6 kHz.  The audio input file can be found here: audio_input.  Once loaded into SciLab, the audio is further down-sampled to 1 kHz to produce a spectrum with a bandwidth of 500 Hz.
(1) [x, Fs, bits] = auread("test_audio");   // load the recorded guitar audio
dec_factor = Fs/1000;                       // down-sample from 6 kHz to 1 kHz
x = x(1:round(dec_factor):length(x));
Fs = Fs/dec_factor;

The input signal is then run through the algorithm as was done in the previous step.  The note computation at the end has been modified to address the confusion introduced during chord changes.  The FFT sampling rate is presumed to be fast enough that at least several consecutive data sets will contain the same notes of the chord at each point in the chord progression.  The changes in chords then appear as transients and can be filtered out.  The data set can be further reduced by filtering out duplicates that appear at consecutive time samples during longer duration notes and chords.
(2) idx = [];   // time indices where a new (non-transient) note set begins
for ii=2:(size(note_intervals,1)-1)
if ~isequal(unique(note_intervals(ii,:)), unique(note_intervals(ii-1,:))) & ...
~isequal(unique(note_intervals(ii,:)), unique(note_intervals(ii+1,:))) then
note_intervals(ii,:) = 0;   // matches neither neighbor: transient during a chord change
continue;
end
if ~isequal(unique(note_intervals(ii-1,:)), unique(note_intervals(ii,:))) then
idx(length(idx)+1) = ii;    // note set differs from the previous sample: record the change
end
end

where note_intervals was the output variable computed in part 1, and ii is the index over the time domain.  Finally, adjacent frequency bin detections have been observed to cause half step errors in the note computation.  They must be merged with the main detection bin as follows:
(3) bin_diff = diff(c(ii,:));
for jj=1:length(bin_diff)
if bin_diff(jj) == 1 then
c(ii,jj+1) = mean([c(ii,jj+1),c(ii,jj)]);
end
end
c(ii,:) = round(c(ii,:));

Executing the algorithm with the preceding changes results in a fairly accurate reconstruction of the notes in the audio file input.  The following is an excerpt from the SciLab output.
Time =  18.9, G  G  B  B  G  G 
Time =  22.3, E  E  E  G 
Time =  23.0, D  D  G  G  G 
Time =  24.8, E  E  G  G  G 
Time =  25.6, E  E  D  D  D  E  E  A  A  A  A  A 
Time =  26.6, E  E  A  A  D  D  D  D  D  E  E  A  A  A  D  D  D  G# G# A  A 
Time =  27.4, E  E  E  G  G  G 
Time =  28.4, G  G  D  D  G  G  G 
Time =  29.2, G  B  G  G  G 
Time =  32.0, A  D  D  D  A  A  A 
Time =  32.8, E  E  E  E  E  G  G  G  G 
Time =  34.0, A  A  E  F  F  F  A  A  A  A 
Time =  35.3, E  E  E  G  G 
Time =  36.9, A  A  F  F  F  A  A  A  A 
Time =  37.9, E  C  E  E  E  G  G  G 
Time =  38.9, E  F  F  F  A  A 
Time =  39.4, F  F  F  F  A  A  A  A 
Time =  40.4, E  E  C  E  E  E  G  G  G 
Time =  41.7, G  G  B  G  G 
Time =  42.2, G  G  G  B  B  D  G  G  G 
Time =  43.0, E  E  E  E  G  G  G 
Time =  44.0, E  E  A  D  D  E  E  A  A  A  D  D 
Time =  46.6, G  G  G  D  D  G  G  G 
Time =  47.4, G  G  G  B  G  G

Note the detection of the new progression at 34 seconds, corresponding to the F - C - F - C chord progression.  This output represents the highest amplitude notes detected at each point in time.  The next step is to determine the intervals of the notes at each instant in time to reconstruct the parent chord that these notes belong to.

Wednesday, April 13, 2011

Decomposing a song into chords, Part 1

This is the first of a multi-part series on how to write a program that decomposes the content of a song into notes and chords.  In this part, an algorithm is developed to measure the frequency and amplitude components in a wideband audio input and determine the musical notes associated with all detected frequencies.  The algorithm is executed against a simulated input of four notes, constructed as two 2-note chords.  In part 2, the algorithm will be tweaked to execute against real audio input from an electric guitar.

This part utilizes SciLab, an excellent open-source (free) alternative to Matlab with similar syntax and functionality.

We begin by defining the four input signals as sine waves at the specified frequencies, amplitudes, and sampling rates.  Simulated noise is added to give it a bit of realism.  Equation 1 defines a single input signal.
(1) x = 40*sin(2*%pi*f1/Fs*n) + noise1;

where f1 is the frequency, Fs is the sampling rate, n is the vector of sample indices (1 through the number of samples), and noise1 is normally distributed random noise.
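To make this concrete, the full simulated input might be assembled along the following lines.  This is just a sketch: the specific frequencies, amplitudes, durations, and noise level are assumptions, chosen to match the C-G / A-D test case described at the end of this part.
Fs = 1000;                      // assumed sampling rate, Hz
n_half = 1:(5*Fs);              // five seconds per chord (assumed duration)
f1 = 261.63; f2 = 392.00;       // C and G, the first 2-note chord
f3 = 220.00; f4 = 293.66;       // A and D, the second 2-note chord
chord1 = 40*sin(2*%pi*f1/Fs*n_half) + 40*sin(2*%pi*f2/Fs*n_half);
chord2 = 40*sin(2*%pi*f3/Fs*n_half) + 40*sin(2*%pi*f4/Fs*n_half);
noise = 5*rand(1, 2*length(n_half), "normal");   // simulated noise
x = [chord1, chord2] + noise;   // the full simulated "song"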

The variable x now contains the full "song".  This song is processed by taking N samples at a time (from the time domain) and computing the fast Fourier transform (FFT) of length N on the samples to produce the variable y in the frequency domain.  A Hamming window is applied prior to the FFT to reduce spectral leakage and widen the main lobe of the filter.  The window of samples is then shifted by Fs/Fso, where Fso is the desired output rate of the FFT.
(2) for ii=1:(Fs/Fso):(length(x)-N*(ppf*M+1))
xx = x(ii:(ii+N-1));                  // take N samples starting at this time offset
nn = window('hm',length(xx)) .* xx;   // apply the Hamming window
y_temp = fft(nn);

Next the output of the FFT (y_temp) is cut in half at the Nyquist frequency to remove the upper image.  The result is then decimated by a factor of M (if desired) to improve processing speed.  The result is stored in the matrix y for this instant in time.
(3) y_temp = y_temp(1:N/2);
y_temp = y_temp(1:M:length(y_temp));
y(idx,:) = y_temp;

The frequency data (y_temp) at this time point is examined to determine the peak amplitude and record any frequency bins that exceed a specified threshold below the peak amplitude.  This is stored away and used later to determine the notes present in the song at this instant in time.
(4) nidx = find(dbphi(abs(y_temp)) > thresh);
c(idx,1:length(nidx)) = nidx;

where thresh is an arbitrary threshold and c is the output matrix used to store the frequency bins containing notes.

After the completion of the loop, the frequency bins containing notes (the matrix c) are analyzed to determine what notes are present at each instant in time.  First, the frequency of each detection bin must be determined.  Then, the note corresponding to that frequency is computed by dividing the bin frequency by the base frequencies (a vector) of A through G# (i.e. the lowest frequency of each of the 12 notes).  The result is a vector of multiples, corresponding to how well each base note fits with the detected frequency.  The correct note will be the one whose multiple is closest to a power of two.
(5) for ii=1:size(c,1)
for jj=1:size(c,2)
note_freq = f(c(ii,jj));
note_base = ones(12,1)*note_freq ./ notes;
note_base = abs(log2(note_base) - round(log2(note_base)));
note_base_idx = find(note_base == min(note_base));
note_intervals(ii,jj) = note_base_idx;

where ii is the index to the time domain, jj is the index to the frequency domain at each instant in time, f is a vector converting bin number to frequency, and notes is a vector of the base note frequencies (A through G#).  The result (note_intervals) is an index into the notes vector corresponding to the correct note.

All of the notes at a given instant in time are collected and compared with the previous instant in time to detect chord changes.  Each time a change is detected, the detected notes are printed to the screen, as sketched below.  The output that follows the sketch uses C, G, A, and D as input signals, taken two at a time, with a switch in the middle of the song.
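Here is a minimal sketch of that comparison-and-print step.  Note that note_names is an assumed helper (a vector of the 12 note name strings, "A" through "G#", in the same order as the notes vector), and the bookkeeping may differ slightly from the actual program.
prev = [];
for ii=1:size(note_intervals,1)
cur = unique(note_intervals(ii, find(note_intervals(ii,:) > 0)));
if ~isempty(cur) & ~isequal(cur, prev) then
mprintf("%s\n", strcat(note_names(cur), " "));   // print the new set of notes
prev = cur;
end
end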
 G  G  C  C
G  G  A  A  C  C  C
G  G  A  A  A  A# B  C  C  C
D  G  G  G  A  A  A  A# A# B  C  C  C
D  D  G  A  A  A  A# A# B  C  C  C
D  D  A  A  A  A# B  C  C
D  D  A  A  A  A#
D  D  A  A

Note the confusion in the middle during the chord change.  The following is a screenshot of the FFT output, depicting the four notes, two at a time, with the confusion during the chord change. In the next step of this program development, we will determine what chords are present at each instant in time by looking at the note intervals on the scale.  We will also remove the confusion caused by chord changes.  The program will then be executed against a real guitar audio input file.

Friday, April 8, 2011

What's really wrong with education in America

Whenever the economy turns sour and the government goes looking for things to cut, education in America is always a top target.  Many Americans think that we spend more money on education than the rest of the world, and get poorer results.  Therefore, the conventional thinking is that we should cut the spending and “fix” the schools to improve performance.  The truth is that education in America does cost more and generally gets the same or poorer results.  But teachers' salaries are not the problem; America already pays its teachers far less than other Western countries.  Here is the data on what’s really going on.

UNESCO Data

First up is spending.  In the most recent year for which data are available (2007), spending on primary/secondary education in America was 3.8% of GDP.  For comparison, the UK spent 3.3% and Australia spent 3.0%.  So America is indeed spending more, 15% more than the UK and 27% more than Australia.  However, 14% of America’s population is enrolled in public primary/secondary education, while 13% of the UK’s and 15% of Australia’s population is enrolled.  So, after taking that into account, spending on education in America is actually only 8% more than the UK but 35% more than Australia per student. (Source: UNESCO Institute of Statistics)
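For clarity, the per-student adjustment works out roughly as follows, using the GDP-share and enrollment figures above:

(3.8 / 14) / (3.3 / 13) ≈ 1.07, or roughly 7-8% more per student than the UK
(3.8 / 14) / (3.0 / 15) ≈ 1.36, or roughly 35-36% more per student than Australia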

What are we getting in results for our spending on education in America?  The Programme for International Student Assessment (PISA) tests 15-year-olds on common skills such as reading, math, and science.  American students had an average score of 500 in reading, 487 in math, and 502 in science, while the UK scored 494, 492, and 514 and Australia scored 515, 514, and 527 respectively.  Taking all subjects together, American students performed essentially the same as the UK, and 4.3% worse than Australia.

PISA Data

Compiled from multiple sources

Teachers' salaries are the number one issue whenever people talk about education in America right now, as many people attribute America’s large education budget to extravagant teachers' salaries.  Contrary to media reports from Wisconsin, teachers' salaries are not the reason America spends so much on education compared to the rest of the world.  Teacher salaries in America start at $23K to $37K (depending on state/location) and average out at $32K to $55K.  Compare that with the UK, which starts teachers out at $31K to $38K (after converting British pounds to American dollars using today’s exchange rate), with averages of $50K to $60K.  Australia is similar to America in that it has different pay ranges in each territory.  In Victoria (Australia), teachers start out at $51K and average out around $61K (note that Australian dollars are very close to American dollars at today's exchange rate).  In summary, America pays its teachers 13% less than the UK to start, and 21% less on average. Compared to Australia, America pays its teachers 41% less to start and 29% less on average.

Performance and spending on education in America are roughly on par with the UK, but with lower teacher salaries.  America over-spends and under-performs significantly when compared to Australia, while vastly under-paying its teachers.

Education in America costs too much money for the results it is achieving.  However, teacher salaries are clearly not the problem.  If anything, teacher salaries in America should be increased.  Where then is all the money going?  What are other countries, specifically Australia, doing to be more efficient with their educational spending?  Unfortunately, the system of education in America, as well as in Australia and other countries, is not a unified federal program.  Every state (or territory) manages its own educational system, with only general guidance from the federal government.  Additionally, large numbers of private schools skew the data on students in the public schools.  Many Americans see this as a good thing, generally preferring state-level control over federal control.  However, it makes data gathering, analysis, and recommendations for improvement very difficult, as every state in America has different results due to different factors.  The same is true for what works well in Australia.  Every territory has different policies contributing to different results for different reasons.

So there is no easy answer on how to fix education in America.  Distributed state-level programs make the problem that much harder to understand and solve.  One thing is clear, however: teachers' salaries do not need to be cut to solve our spending problems.


Tuesday, April 5, 2011

What's so bad about compromise?

The political climate in America is anti-compromise.  Democratic voters are angry with President Obama for a series of compromises he has made with Republicans, starting with Healthcare (dropped the public option), followed by taxes (breaks for the wealthy), and culminating with the federal budget.  Historically, Republicans in Congress have never been big on compromise, and voters are warning politicians not to give in now either.  The result is a political stalemate on pretty much every major issue.

Political Parties in the US - from the Wikimedia Commons

It’s hard to recall a vote on significant legislation that wasn’t along strict party lines in either chamber of Congress (refer to voting records here and here).  Congress was intentionally set up with rules that make it exceedingly difficult to pass (or in some cases even vote on) legislation that doesn’t have the support of a supermajority.  However, America is pretty evenly divided between the parties, which makes supermajorities almost impossible to achieve.  Thus Congress ends up being dysfunctional, no matter which party controls it.

Americans love to blame Washington politics for this dysfunction.  But I believe the blame actually rests with Americans themselves.  Recent polls on the budget debate show that 1 in 3 Americans would rather have the government shut down than compromise on their principles.  Within the Republican party, the number jumps to 1 in 2.  President Obama has seen a steady drop in his marks for leadership as he continues to compromise on major issues.  A large number of Americans want their elected officials to stand firm on their principles and avoid compromise.  When elected officials listen, the result is a vote along strict party lines for a biased piece of legislation (at best) or a complete legislative impasse (at worst).

Ironically, most leadership surveys (such as the one linked above) use words like “strong” and “decisive” when asking the polling questions.  A leader who is willing to compromise is seen as weak and indecisive (a.k.a. a flip-flopper).  The leadership surveys themselves need to be fixed.  Strong and decisive leadership is critical on the battlefield, but in politics it translates into “stubborn” and “narrow-minded”.  Leadership of a large and diverse country is not about stubbornly sticking to your personal idea of what is best.  It’s about listening to everyone, weighing everyone’s point of view, weighing outside factors that people aren't even aware of, and arriving at a compromise that does what is best for everyone as a whole.  This type of leadership is not only lacking in Washington, it is actively discouraged.  We can’t blame “politics as usual” for our problems when we are the ones creating the politics in the first place.