
Friday, February 3, 2017

Thinking

Posthip Scott sent me a story about some guys who were going to fix the WiFi connection problem using 'machine learning', which I suspect is a primitive form of AI (Artificial Intelligence). So I'm reading this story and they are talking about what goes on when your computer (or phone) tries to make a WiFi connection. They run through various scenarios and then instead of tackling the problem with logic, deduction and reasoning, they dump it on a machine learning system. That system runs through various scenarios and 'learns' which sequences are able to establish a connection the quickest.

To my mind this is being lazy. The problem doesn't sound that hard. If these guys just applied themselves they could probably work out an efficient design without having to resort to this 'machine learning' stuff, however involved, complicated or resource-intensive it is. ('However' means that while I may have my suspicions, I have no idea.)

Steam Train to Hell
Then I realized a couple of things. One, using the 'machine learning' technique allows them to throw a popular buzzword around and let people know that they are on the band-wagon (full-steam ahead to Armageddon).

Two, the problem is kind of mundane and kind of time consuming to solve. You've got a bunch of things to try. Certain sequences will work better than others, but the only way to find out is to try it, and as you know, establishing a new WiFi connection can take time. If you only have to do that once or twice, it's tolerable, but imagine having to do it a thousand or even ten thousand times. Ain't nobody got time for that.

Fire up your 'machine learning' program and let it try to make a connection a couple of thousand times. That might take a week, but since nobody needs to sit there and watch it, you can work on other things. Presuming, that is, that the 'machine learning' program itself is reliable. Still have to use good old testing by humans to get that part down.
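
The story doesn't say what flavor of 'machine learning' they used, so this is strictly a guess on my part, but the simplest version of the idea looks like a slot machine problem: treat each candidate connection sequence as a lever, pull the levers a couple of thousand times, and keep track of which one pays off fastest. A minimal sketch in Python, where the strategy names and timings are all made up for illustration:

    import random

    # Hypothetical connection strategies and their made-up average
    # connect times in seconds -- stand-ins for whatever sequences
    # the real system was trying.
    STRATEGIES = {
        "scan-then-auth": 3.0,
        "cached-params":  1.2,
        "probe-first":    2.1,
    }

    def try_connection(strategy):
        """Simulate one connection attempt; return the elapsed time."""
        return max(0.1, random.gauss(STRATEGIES[strategy], 0.5))

    def learn(trials=2000, epsilon=0.1):
        """Epsilon-greedy: mostly repeat the best strategy seen so far,
        occasionally try a random one so nothing gets ignored forever."""
        totals = {s: 0.0 for s in STRATEGIES}
        counts = {s: 0 for s in STRATEGIES}
        for _ in range(trials):
            if random.random() < epsilon or not all(counts.values()):
                choice = random.choice(list(STRATEGIES))
            else:
                choice = min(STRATEGIES, key=lambda s: totals[s] / counts[s])
            totals[choice] += try_connection(choice)
            counts[choice] += 1
        return min(STRATEGIES,
                   key=lambda s: totals[s] / counts[s] if counts[s] else float("inf"))

    print(learn())  # almost always settles on "cached-params", the quickest one

Nobody worked out why the winning sequence is quickest; the program just noticed that it is.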

Three, the problem may not actually be amenable to solving by human logic and reason. There are some problems that at first are very simple, but then you add another set of conditions and it gets a little more complicated. Add another set of conditions and the problem becomes complex. Add another one or two and all of a sudden the problem requires a super-computer running for a month of Sundays to solve. (Actually I wonder about that. Do we have any working computer programs that are designed to solve real-world problems that would take six months of run-time on the government's weather-forecasting computers?)

I was trying to think of a real-world problem that could be used to illustrate this idea. I mean, with all my experience you'd think I'd be able to come up with something. I quickly latched onto the Traveling Salesman Problem, and while it is not exactly what I was trying to convey, it will do.

Briefly, the Traveling Salesman Problem involves plotting the most efficient route for a person to visit some number of cities. If there are only two cities the problem is trivial: you visit one and then you visit the other. Likewise, if the cities are all on a line like, say, I-70, the solution is easy: start at one end and then drive to the next city on the line.

It's when you have several cities scattered across the state that the problem becomes difficult. If you have four cities you have six possible routes, if you have five cities there are 24 routes, and six cities means there are 120 routes. By the time you get to ten cities you have more than a quarter of a million routes. A person could solve the six-city problem, and any smart phone could solve the ten-city problem in less than a second, but bump the number of cities up to a hundred, which might be a rural salesman's monthly tour, or a big city salesman's annual tour, and ain't nobody got time for that.
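
To make the blow-up concrete, here is a sketch (coordinates made up) that brute-forces the problem the obvious way: fix the starting city and try every ordering of the rest, which is where those route counts come from.

    from itertools import permutations
    from math import dist, factorial

    # Made-up (x, y) coordinates for five cities.
    cities = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (8, 5)}

    def tour_length(order):
        """Total distance of a round trip through the cities in this order."""
        stops = list(order) + [order[0]]
        return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

    # Fix the starting city and try every ordering of the rest: (n-1)! routes.
    rest = [c for c in cities if c != "A"]
    best = min((("A",) + p for p in permutations(rest)), key=tour_length)
    print(best, round(tour_length(best), 1))

    for n in (4, 5, 6, 10, 100):
        print(n, "cities:", factorial(n - 1), "routes")

Five cities is instant. A hundred cities is 99! routes, a number over 150 digits long.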

As I recall, Knuth wrote Algorithms and Data Structures, one of the textbooks I used in school. (Actually, Donald E. Knuth wrote The Art of Computer Programming; Niklaus Wirth wrote Algorithms + Data Structures = Programs. Whatever.) Knuth has been writing algorithms ever since. He has something like a zillion algorithms for doing I don't know what. Everything that could possibly be done by computer, I suppose. In Computer Science they teach you some basic stuff, like the difference between an address and its contents, how the contents of a memory location can be interpreted as a letter (an alphanumeric character), a number, or as an address of another location (a pointer!). Then you move up to linked lists (or down into machine code). The next step up is algorithms. The only one I remember is Quick Sort, and I'm actually a little hazy on how it works. We went over half a dozen inferior sort algorithms before we got to Quick Sort, but once I got out of school I didn't use it until I started programming in C, and once you get there you don't need to know it anymore because it's part of the standard C library.
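
For anyone else who is hazy on it, Quick Sort boils down to: pick a pivot, split everything into smaller and larger piles, and sort each pile the same way. A minimal recursive sketch (the copying version, not the in-place version the C library's qsort actually uses):

    def quicksort(items):
        """Pick a pivot, split into smaller/equal/larger piles, recurse.
        A simple copying version; the C library's qsort() works in place."""
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        return (quicksort([x for x in items if x < pivot])
                + [x for x in items if x == pivot]
                + quicksort([x for x in items if x > pivot]))

    print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]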

The point is that there might already be an algorithm to solve this problem, but how would you know? How could you possibly find out? And would you be willing to spend the $200 necessary to buy the book that explains it?

So 'machine learning' may not be all that special or miraculous; it's just another tool that we can use to bend the machines to our will.



For the last year or so I've been playing on Codingame.com, working on writing programs to solve various puzzles. A number of their puzzles involve a rectangular grid, roughly the size of a chessboard, and solving these problems often involves using recursion. Recursion, for me, has always been a bit tricky. I can often see what needs to be done, but translating that vision into variables, and then knowing what questions to ask of those variables, can be difficult. If the answer does not appear immediately, stewing on it for a couple of days will generally produce a solution.
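
To give a flavor of it, most of these grid puzzles reduce to some variation on the pattern below. This is a made-up flood-fill example, not any particular Codingame puzzle; the tricky part is always pinning down the base cases, the conditions that stop the recursion.

    def flood_fill(grid, row, col, target, paint):
        """Recursively repaint the connected patch of `target` cells
        reachable from (row, col). The base cases -- off the board, or
        not a target cell -- are what stop the recursion."""
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            return
        if grid[row][col] != target:
            return
        grid[row][col] = paint
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            flood_fill(grid, row + dr, col + dc, target, paint)

    board = [list("..#."), list(".##."), list("....")]
    flood_fill(board, 0, 0, ".", "*")
    for row in board:
        print("".join(row))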

But all these problems are so similar! I should be able to design a structure that could be used for all of these problems and so enable me to solve them sooner.

But then I got some other ideas that I wanted to pursue, and building a general purpose recursive structure didn't seem all that important.

The curious part is that I don't have, or am not aware of having, a procedure for designing recursive procedures. I could tell you what is going on in any one problem, though it's complicated and would take some effort to record. But to make any use of that, you would have to record the process I used to solve other recursive problems as well, and then compare them to see if there are, in fact, any similarities.

Maybe 'machine learning' is the way to go.


2 comments:

Ole Phat Stu said...

Afaik, simulated annealing will get you solutions to the travelling salesman problem that are within an arbitrary percentage of the optimum, i.e. good enough to all intents and purposes.

Chuck Pergiel said...

Looking at simulated annealing, it seems very similar to how quantum computers are supposed to work. Both seek to minimize values at multiple nodes. I'm not quite sure how simulated annealing does it, but quantum computing does it with magic, as far as I can tell anyway.
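
For what it's worth, here is a rough sketch of the simulated annealing approach Stu mentioned, using made-up coordinates: keep a current tour, propose random swaps, always accept improvements, and occasionally accept a worse tour, less and less often as the 'temperature' cools, so the search doesn't get stuck in the first rut it finds.

    import math, random

    def anneal(coords, steps=20000, temp=10.0, cooling=0.9995):
        """Simulated annealing for the TSP: propose random two-city swaps,
        always accept improvements, and accept setbacks with a probability
        that shrinks as the temperature cools."""
        def length(tour):
            return sum(math.dist(coords[tour[i]], coords[tour[i - 1]])
                       for i in range(len(tour)))
        tour = list(coords)
        random.shuffle(tour)
        current = length(tour)
        best, best_len = tour[:], current
        for _ in range(steps):
            i, j = random.sample(range(len(tour)), 2)
            tour[i], tour[j] = tour[j], tour[i]
            candidate = length(tour)
            if candidate < current or random.random() < math.exp((current - candidate) / temp):
                current = candidate
                if current < best_len:
                    best, best_len = tour[:], current
            else:
                tour[i], tour[j] = tour[j], tour[i]  # undo the swap
            temp *= cooling
        return best, round(best_len, 1)

    # Made-up coordinates; with five cities this matches brute force. The
    # point is that it keeps working long after (n-1)! enumeration is hopeless.
    coords = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (8, 5)}
    print(anneal(coords))

No magic required, just a willingness to settle for very-nearly-optimal.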