Here is a symmetric matrix:
And these are its eigenvalues:
You can click on any element in the matrix and drag horizontally to increase or decrease its value. As the matrix is symmetric, dragging an off-diagonal element will also modify its symmetrically opposite partner.
This diagram displays the eigenvalues graphically. There is a vertical line for each eigenvalue. The horizontal coordinate of the line represents the eigenvalue.
The main point I want to illustrate is that eigenvalues tend to avoid each other. If the matrix is n by n, you have n(n+1)/2 matrix elements to play with and only n eigenvalues, and yet it is surprisingly hard to make two eigenvalues come out exactly equal. Sometimes two eigenvalues get very close and appear to collide, but if you check the numerical values you'll see that they usually just approach each other and then separate again.
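The 2 by 2 case already shows why this happens. The eigenvalues of [[a, b], [b, c]] are ((a + c) ± sqrt((a - c)² + 4b²)) / 2, so the gap between them is sqrt((a - c)² + 4b²), which vanishes only when a = c and b = 0 simultaneously: two conditions that a single drag almost never satisfies at once. A small sketch (not part of the page's code):

```javascript
// Gap between the two eigenvalues of the symmetric matrix [[a, b], [b, c]].
// It is zero only when BOTH a === c and b === 0, which is why dragging
// one element almost never produces a repeated eigenvalue.
function eigGap2x2(a, b, c) {
  // sqrt((a - c)^2 + (2b)^2)
  return Math.hypot(a - c, 2 * b);
}

// Drag a through c = 1.3 with b fixed at 0.1: the gap bottoms out
// near 0.2 instead of reaching zero, so the eigenvalues never meet.
for (const a of [1.0, 1.1, 1.2, 1.3, 1.4]) {
  console.log(a.toFixed(1), eigGap2x2(a, 0.1, 1.3).toFixed(4));
}
```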
This connects to the theory of random matrices. If the elements of the matrix are generated at random from a normal distribution, then the distribution of the eigenvalues is similar to the positions in a thermodynamic ensemble of electrically charged particles in a 1D quadratic potential well. Because of their charge, the particles tend to repel each other, and the eigenvalues inherit this repulsion. For example, see Introduction to Random Matrices - Theory and Practice.
The algorithm used to compute the eigenvalues is the Jacobi method, based on the pseudocode at Wikipedia. I think that pseudocode is buggy: for example, it divides by zero when given an identity matrix. So my code will probably occasionally divide by zero too.
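For the curious, the idea can be sketched as follows. This is not the page's actual code, just a minimal Jacobi iteration that guards against the zero-pivot division: when an off-diagonal pivot is already zero (as in the identity matrix), the rotation is skipped instead of computing the angle from a division by that pivot.

```javascript
// Jacobi eigenvalue iteration for a real symmetric matrix (a sketch).
// Each Givens rotation zeroes one off-diagonal element; repeating
// sweeps drives the matrix toward diagonal form, and the diagonal
// entries converge to the eigenvalues.
function jacobiEigenvalues(A, tol = 1e-12, maxSweeps = 100) {
  const n = A.length;
  const a = A.map(row => row.slice()); // work on a copy

  for (let sweep = 0; sweep < maxSweeps; sweep++) {
    // Stop when the off-diagonal mass is negligible.
    let off = 0;
    for (let p = 0; p < n; p++)
      for (let q = p + 1; q < n; q++) off += a[p][q] * a[p][q];
    if (off < tol) break;

    for (let p = 0; p < n; p++) {
      for (let q = p + 1; q < n; q++) {
        // Guard: a zero pivot (e.g. identity matrix) would otherwise
        // put a[p][q] in a denominator below.
        if (Math.abs(a[p][q]) < 1e-300) continue;

        // Rotation angle chosen so the rotation zeroes a[p][q].
        const theta = (a[q][q] - a[p][p]) / (2 * a[p][q]);
        const t = (theta >= 0 ? 1 : -1) /
                  (Math.abs(theta) + Math.sqrt(theta * theta + 1));
        const c = 1 / Math.sqrt(t * t + 1);
        const s = t * c;

        // Apply the rotation to columns p and q, then rows p and q.
        for (let k = 0; k < n; k++) {
          const akp = a[k][p], akq = a[k][q];
          a[k][p] = c * akp - s * akq;
          a[k][q] = s * akp + c * akq;
        }
        for (let k = 0; k < n; k++) {
          const apk = a[p][k], aqk = a[q][k];
          a[p][k] = c * apk - s * aqk;
          a[q][k] = s * apk + c * aqk;
        }
      }
    }
  }
  // Eigenvalues are the diagonal of the (nearly) diagonalized matrix.
  return a.map((row, i) => row[i]).sort((x, y) => x - y);
}
```

With the guard, an identity matrix simply passes the convergence test on the first sweep and its eigenvalues (all ones) come out unchanged.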
I've tested this code only on Safari and Chrome on OSX.