Saturday, June 22, 2019

Balloon Battle

I wanted to run a team-building activity this summer in the makerspace, and settled on the format of a robot Balloon Battle.

After the activity I spent an evening learning Adobe Premiere and putting together a video. So this was a mix of a workshop and a video editing project.

Shout-out to M5 staff and volunteers for helping with and participating in this event.

Here's the video!

Saturday, June 8, 2019

Nonholonomic Driving Robots Revisited: RRTs

"Screenshot of RRT"

Back in 2017, before a major geographical move and career change, I was working on a project where I built and wrote an A* based path planning algorithm for a nonholonomic robot:

In the process of doing research for that project, I had come across the concept of Rapidly Exploring Random Trees:

I tested one simple implementation on GitHub (I can't remember which) and thought about what would be involved in using it for my path planning approach. I decided I would need to consider an extension and probably would not be successful with a basic RRT alone. I identified Theta* RRT as a promising technique for solving my particular path planning problem.

But I ran out of time and confidence in my ability, and went with what I already understood fairly well, which was A*, and wrote about it on this blog (but never got around to publishing it on GitHub).

Since I moved in 2017, nearly all my projects involve making makerspaces. Instead of making projects, I help other people create projects. And recently, rather than helping other people create projects, I create systems to help people help other people create projects. A big theme in my life is institutional sustainability: long-term institutional survival, and the development of infrastructure that is not highly dependent on a single individual's expertise, presence, and commitment. I am trying to automate my job. Not that I don't enjoy doing it, but it feels like the responsible thing to do; too many makerspaces and other institutions face serious existential risk because everything rests on the shoulders of one leader.

I haven't had too much time to collect my thoughts into a post, though I will keep trying in the future.

I revisited Theta* RRT nearly two years later in the context of a graduate level robotics class I signed up for, perhaps foolishly.

The robotics class was excellent, but a huge time sink. For the first time in two years I wasn't teaching a three hour lab based class once a week, so I figured that freed up about 10 hours a week to spend on this. The professor for this class said to expect to spend 5 hours a week on average on the class, so I figured I was good...

There were 5 assignments due over the 13 week semester. I spent only 15 hours from start to finish on at least two of them (but those were the easiest ones). If they had each taken 15 hours, 15 hours * 5 assignments / 13 weeks is about 5.8 hours a week. But many of them took me much, much longer than 15 hours. I probably spent 50-60 hours on the Theta* RRT project. On top of that were required and optional readings, which weren't really enforced. I hate to admit I did nearly none of those readings. I was really struggling to balance my job and my desire to perform really, really well in this class, so I tried to limit my time expenditure to only the things that would directly affect my grade. I guess when I was a kid I had to make compromises like 'do well in school' vs. 'socialize and have fun.' But now I mostly make compromises like 'do well in school' vs. 'spend more time on work,' and I'm starting to really feel like an adult...

Fast forward to the end of the Spring 2019 semester. I'm trying to finish my final project, for which I've chosen to implement Theta* RRT in Python, and also my final other assignment, an implementation of RRTs in C++. I pushed all my other commitments forward and sat at my dining room table working for 15 hours straight on the last day, accidentally causing what might be permanent nerve damage to one of my legs (it has been a month and I still have a large area without much skin-level sensation on my shin). I do feel it was worth it, though if I had known, I would have gotten a more comfortable chair, which would have prevented that from happening. I would also have found a way to make more time for this project in the weeks prior.

User-uploaded image: Screenshot+from+2019-05-07+19-56-40.png

The rapidly exploring random tree works like this (glossing over details):

There is a starting position and orientation for the car (red, above) and we want it to reach the goal position and orientation (green, above). The car can't steer very sharply; it has a limited turn radius. There are many ways it could reach the goal, but it is impossible to search them all, because the x and y position, as well as the angle of the car, are all continuous real variables, so the space of configurations is infinite (uncountably so).

So at every step of this algorithm, which we will repeat until we find some path that drives us to the goal within a tolerance, we will randomly choose some x,y position and car angle.

We're going to keep track of paths in this process in the form of a tree, branching from the red car (starting node). We'll identify the configuration in the tree nearest to the new random configuration (note that in the beginning, the only thing in the tree is the starting node).

Then we will try to drive to the new random configuration. We probably can't find a way to drive to the new configuration exactly, but we'll drive towards it and wherever we land, we'll add that to the tree, keeping track of that steering path we used to get there. We'll always check to see if we made it close enough to the goal, and if we did, we'll extract the path from the tree that got us there.

The entire tree is shown in black, and the extracted solution path in blue.
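That loop can be sketched in a few lines of Python. This is a toy version, not my project code: the straight-line `steer` below ignores the turn-radius constraint entirely, and the sampling bounds and tolerance are made up for illustration.

```python
import math
import random

def rrt(start, goal, steer, dist, tol=0.5, max_iters=5000):
    """Minimal RRT skeleton: grow a tree from `start` until some node lands
    within `tol` of `goal`, then walk parent pointers to extract the path."""
    parent = {start: None}
    for _ in range(max_iters):
        # Randomly choose an (x, y, angle) configuration...
        sample = (random.uniform(0, 10), random.uniform(0, 10),
                  random.uniform(-math.pi, math.pi))
        # ...find the configuration already in the tree nearest to it...
        nearest = min(parent, key=lambda q: dist(q, sample))
        # ...and try to drive toward the sample from there.
        new = steer(nearest, sample)
        if new is None:          # steering failed (e.g. collision)
            continue
        parent[new] = nearest
        if dist(new, goal) < tol:
            path, q = [], new    # close enough: extract the path
            while q is not None:
                path.append(q)
                q = parent[q]
            return path[::-1]
    return None

# Toy stand-ins for the real pieces: planar distance, and a straight-line
# "steer" that takes a bounded step toward the target.
def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def steer(q, target, step=0.5):
    d = dist(q, target)
    t = 1.0 if d <= step else step / d
    return (q[0] + t * (target[0] - q[0]),
            q[1] + t * (target[1] - q[1]),
            target[2])

random.seed(1)  # seeded only so the sketch is reproducible
path = rrt((0.0, 0.0, 0.0), (9.0, 9.0, 0.0), steer, dist)
```

In the real thing, `steer` integrates the car's motion model toward the sample (respecting the turn radius) and fails on collision, which is why the skeleton checks for that.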

User-uploaded image: Screenshot+from+2019-05-07+19-58-08.png
One of the very lucky trials for the plain RRT. Note how it considered backing up in one iteration, but otherwise explored directly to the goal.

Parallel parking:
User-uploaded image: Screenshot+from+2019-05-07+20-23-10.png

User-uploaded image: Screenshot+from+2019-05-07+20-18-34.png

The Theta* RRT project took longer than expected and I didn't complete everything I wanted to put into it. So I'll leave a more in depth post about it for after I get around to revisiting it and completing all the parts I wanted to complete. You can follow along on GitHub:

Tuesday, January 22, 2019

Rhombic Enneacontahedron Post #2

Alright, here's the code. It is a total mess, but I knew I just wasn't going to get around to even starting to clean it up, so I committed it before I could forget about it (and likely struggle to locate it in the future).

I figured out the degree of freedom I forgot about. When you truncate, you can choose how far to move the new vertices in along the old edges. I split edges in three, moving each new vertex in by a third (see the diagram from Wikipedia that I included in the previous post). Splitting in three ensures you create regular hexagons in the soccer ball step (the truncated icosahedron). You don't have to create regular hexagons, though, and by modifying this you can make slim and broad rhombi, as shown in the images below.
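In code terms, the freed-up parameter is just the interpolation fraction used when placing a new vertex along an old edge. A tiny sketch (a hypothetical helper, not my actual script); `t = 1/3` is the regular-hexagon case:

```python
import numpy as np

def split_point(v_from, v_to, t=1/3):
    """New truncation vertex placed fraction t along the edge from
    v_from toward v_to.  t = 1/3 keeps the hexagons regular; other
    values produce the slimmer or broader rhombi after the join step."""
    v_from = np.asarray(v_from, dtype=float)
    v_to = np.asarray(v_to, dtype=float)
    return (1 - t) * v_from + t * v_to

# Splitting an edge "in three": each endpoint spawns a vertex a third in.
a, b = (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)
p = split_point(a, b)   # one third of the way from a toward b
q = split_point(b, a)   # one third of the way from b toward a
```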

This explains why I didn't match the angle values I found on Wikipedia for the rhombic enneacontahedron. I suspect that most of the rhombic enneacontahedrons people make come from just a few instructional or paper-folding guides online, which often refer to slim and broad rhombi. So they probably aren't making regular hexagons in the truncation step; regular hexagons seem to yield two very similar rhombi, so that you can't easily call them broad or slim.

Extreme cases, caused by changing the truncation step's parameter, seem to be less 'round'

Here's the final 3D printed assembly, after gluing in the magnets. For reference, the edges the white and red tiles share are 2 cm long.

It would be fun to print the exaggerated versions with very broad and very slim rhombi at some point in the future.

Want to make your own? The code isn't super clean, so here are the steps at a high level:

  • If you want to modify the truncation, you'll need to locate that section of the code, modify the appropriate line, and then get the code to spit out all the info you need on the angles and side lengths that are needed by the OpenSCAD script. There are a bunch of poorly commented areas where you can follow my pattern--I was working quickly and I just didn't make it easy to use yet. Sorry!
  • If you keep the angles as in the OpenSCAD script, you'll make a shape like the one I made. You need to make sure the user controlled side length is the same for the tiles of type A and B. You also need to make sure you match the magnet info between those two types of tile.
  • You can do this with just one magnet per edge if you want to save on magnets, though it won't be as strong and the tiles can pivot until locked into place by neighbors. You will need to go through my code for this, because I didn't make it parametric either.
Easiest way to do it? Keep everything exactly the same as my OpenSCAD code, render the STL and slice it for your 3D printer. Get at least 90*4 magnets, 3mm diameter by 2mm height, axially aligned. The magnet diameter and depth values in the script might still need to be adjusted to get a nice fit, depending on your 3D printer.

Wednesday, January 9, 2019

Rhombic Enneacontahedron

Over a year ago I decided the next 3d printed magnetic tile based polyhedron I would create would be the rhombic enneacontahedron.

In the past I created shapes mostly by looking up the dihedral and face vertex angles on the internet. Only for the most recent one, the parametric pyritohedra/dodecahedron script, did I generate the points of the whole polyhedron and manipulate them before creating the individual faces as tiles. For the trapezohedra, some of the ones I attempted to create had the wrong angles, and it wasn't clear to me whether the issue was in the angles I had gotten from the internet or in errors in my OpenSCAD script. I decided that the right way forward would be to build the tiles as I did in the pyritohedra script: by creating the full polyhedron and then computing the angles for the tiles myself.

I had just learned about Conway polyhedron notation and considered that the right way to build this would be to use that concept. But I didn't end up getting a library for Conway polyhedron notation, or creating a library for these operators. Instead I did everything a bit less abstractly and elegantly than I'd have liked, but I think that's normal to expect for a first draft.

I switched from OpenSCAD to Python at this point, because I find it impossible to do anything with for loops in OpenSCAD (highly inefficient and slow). For creating polygons in 3D space I used pyny3d, which is already integrated with matplotlib for drawing, and I used numpy for the math. I finally switched to Python 3 for this and wrote every print statement incorrectly the first time...

We begin with the icosahedron:

For the icosahedron I started with a list of 12 vertices as listed in various places online.
Each of the descriptions below, in (x, y, z) format, expands into four vertices, given the ± options in two of the dimensions.
(0, ±1, ± φ)
(±1, ± φ, 0) 
(± φ, 0, ±1)
where φ (phi) is the golden ratio, 0.5*(1+sqrt(5)).
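Expanding those ± patterns into all 12 vertices takes only a few lines (a sketch of the idea, not necessarily my exact code):

```python
import itertools
import math

phi = 0.5 * (1 + math.sqrt(5))  # golden ratio

# Each pattern, e.g. (0, ±1, ±φ), expands into four sign combinations.
vertices = []
for s1, s2 in itertools.product((1, -1), repeat=2):
    vertices.append((0.0, s1 * 1.0, s2 * phi))
    vertices.append((s1 * 1.0, s2 * phi, 0.0))
    vertices.append((s1 * phi, 0.0, s2 * 1.0))
# 12 vertices total, all at distance sqrt(1 + phi**2) from the origin
```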

The first goal is to truncate the icosahedron. I came up with my implementation of truncation by interpreting the picture and one-line description from Wikipedia, which I'll include below; it is drawn on an example cube:

Truncate cuts off the polyhedron at its vertices but leaves a portion of the original edges.[10]

I describe truncation like this:
For each vertex, find every edge it is a part of, and create a new vertex one-third of the way inward along that edge. The new shape's vertices are these newly created vertices.
The new shape's faces I separate into two categories.
The first category is created by drawing edges between all the new vertices spawned off a single old vertex. In our case, the icosahedron has vertices where five edges meet, so these are five-sided faces (pentagons).
The second category is the truncated versions of the original faces. For each original face, consider each original vertex. That original vertex spawned several new vertices, but only two of them lie on the original face we are currently evaluating. The new face that replaces the old face is defined by all of the new vertices that lie on the old face.
The icosahedron's faces were triangles, so with two new vertices per original vertex times three original vertices, we have created six-vertex faces (hexagons).
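Glossing over the data structures, the whole truncation step can be sketched like this. The layout (a vertex dict plus index cycles) and the tetrahedron example are made up for illustration; my real code tracks adjacency differently:

```python
import numpy as np

def truncate(vertices, faces, t=1/3):
    """Truncation sketch.  `vertices`: dict id -> xyz; `faces`: list of
    vertex-id cycles.  Each directed edge (v, u) spawns a new vertex the
    fraction t of the way from v toward u."""
    new_verts = {}
    for face in faces:
        for i, v in enumerate(face):
            u = face[(i + 1) % len(face)]
            for a, b in ((v, u), (u, v)):
                if (a, b) not in new_verts:
                    p = np.asarray(vertices[a], dtype=float)
                    q = np.asarray(vertices[b], dtype=float)
                    new_verts[(a, b)] = tuple((1 - t) * p + t * q)

    # Second category: each old face shrinks, picking up the two new
    # vertices spawned on each of its edges (a triangle becomes a hexagon).
    shrunk_faces = []
    for face in faces:
        cycle = []
        for i, v in enumerate(face):
            u = face[(i + 1) % len(face)]
            cycle += [(v, u), (u, v)]
        shrunk_faces.append(cycle)

    # First category: one new face per old vertex, made of every new vertex
    # that vertex spawned (left unordered here; ordering needs adjacency).
    vertex_figures = {v: {k for k in new_verts if k[0] == v} for v in vertices}
    return new_verts, shrunk_faces, vertex_figures

# A tetrahedron truncates into 4 hexagons and 4 triangles:
tet_verts = {0: (1, 1, 1), 1: (1, -1, -1), 2: (-1, 1, -1), 3: (-1, -1, 1)}
tet_faces = [[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]]
nv, hexes, tris = truncate(tet_verts, tet_faces)
```

On the icosahedron, the same routine would turn the triangles into hexagons and the five-edge vertex figures into pentagons.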

The truncated icosahedron is a soccer ball, shown above in red pentagons and blue hexagons.

Now we must run a join operation on the truncated icosahedron. Again, expanding from only the short Wikipedia entry:

Join creates quadrilateral faces. 

That's not a lot to go on... To better understand it I considered a similar operation, kis:

Kis raises a pyramid on each face

Okay, so join is like kis without the original edges. The old vertices can stay put, but the new vertices, created in the middle of each face, need to be lifted. But by how much? And where exactly on the face do we center them?

I had already gone over the math for finding the center point on a face when I worked out the dual operator. Luckily for me the pentagons and hexagons are regular, so in this case I could just take the mean of the points.

The insight for how to lift the vertices came from symmetry. Some of the newly created quadrilaterals go from one hexagon center point to another hexagon center point. To preserve symmetry, I assumed that all hexagon center points are lifted by the same amount. I drew a quick diagram to work out how far to raise them so that all four points are coplanar (otherwise we don't have a valid face; at the time of writing, this site, which I enjoyed using while learning about Conway operators, seems to have some nonplanar 'faces', so it looks very strange).

New quadrilaterals connecting old hexagon centers to old pentagon centers must also be created, using the two old vertices that come from the edge shared between the adjacent hexagon and pentagon. Since we have finalized the height of the point raised off the hexagon center, and the two points from the shared edge are unchanged, the only thing left to do is compute how far to raise the pentagon center along the normal of the pentagon face until it lies on the plane defined by these other three points.
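That last computation is a standard ray-plane intersection: slide the pentagon center along its face normal until it lands on the plane through the other three points. A generic sketch (not lifted from my script):

```python
import numpy as np

def lift_to_plane(center, direction, p0, p1, p2):
    """Move `center` along `direction` until it lies on the plane through
    p0, p1, p2.  Solves n . (center + s*direction - p0) = 0 for s."""
    center = np.asarray(center, dtype=float)
    direction = np.asarray(direction, dtype=float)
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    n = np.cross(p1 - p0, p2 - p0)          # plane normal
    s = np.dot(n, p0 - center) / np.dot(n, direction)
    return center + s * direction

# For example, lift the origin along +z onto the plane z = 2:
lifted = lift_to_plane((0, 0, 0), (0, 0, 1), (1, 0, 2), (0, 1, 2), (-1, -1, 2))
```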

To draw the faces I needed to keep track of which faces were adjacent to which other faces, so I created some dictionaries and populated them with simple for loops because I'm not concerned about optimizing my code at all given what I'm using it for.

The next step was to compute and print out the angles I needed to create a tile. This was just a question of identifying the right vertices, edges, and vectors and computing various norms and cross products.

For the tiles I moved back to OpenSCAD. I reused my rhombic triacontahedron tile code for the parallelogram faces (red). For the kite faces (blue) I created some new, but very similar code.

Here's the info printed out; there's definitely going to be some rounding error, but nothing significant for my purposes.

There are two dihedral angles:
Red to blue faces: 159.4455573432933 degrees
Blue to blue faces: 160.81186354627906 degrees

The red parallelogram faces have an acute interior angle: 56.67993474152847 degrees
And the complement (computed just to check if correct, but not needed as additional info): 123.32006525847153 degrees

The blue kites are more interesting. The skinniest angles are the tips of the five pointed stars they create.
56.67993474152846 degrees

The opposite angle, which lies in the middle of those stars where they meet in groups of five, is close but certainly not the same angle:
69.7316519531762 degrees

It is a kite, so it is symmetric in the remaining dimension, and both of these larger angles are the same (by the angle sum: 360 minus the other two, halved):
116.79420665264767 degrees

This didn't seem to match the info on Wikipedia and I'm also not seeing such a clear distinction between 'slim' and 'broad' rhombi. I wonder if I did something wrong, though so far everything is internally consistent for my construction. I wonder if there's an assumption I made that got rid of one degree of freedom....

Finally I needed some info about the edges. The red edges are all the same length, and they match the slightly longer of the two edge lengths of the blue tiles. The other type of edge is the one the blue tiles share with each other; I call this a 'short edge.'

I computed a long side: 0.4339800705873414
And a short side: 0.36037602329028573

And the ratio of short to long: about 0.8304.

I ordered a few kinds of magnets in sufficient quantity (I need 90 faces * 4 magnets a face, but I might also double up and do 2 magnets an edge or 8 magnets a face). While I wait, I am 3D printing some tiles just to tape together and see if the angles work. So far it is looking promising.

parallelogram faces model, shown with magnet holes

 kite faces model, shown without magnet holes

They look fairly similar to the untrained eye...

Taped together with masking tape: red parallelograms and white kites. They were all printed in white, and then I used a Sharpie to color the parallelograms red.

A five pointed star of kites.

Since these come together nicely, I think I have all the vertex angles and the dihedral angles correct.

I've always used either masking tape (for testing) or magnets (super-glued into the slots) to construct these, but somebody who printed one of my designs and tried gluing the tiles together noticed an accumulating offset that grew as the shape was constructed, so it wouldn't close. It seems that you need to leave some flexibility in order to distribute errors (from the manufacturing process or misalignment during assembly). Some ideas I haven't tried: putty, velcro.

Once I've had a chance to build the whole thing, I'll follow up with another post, and clean up the code and commit it to GitHub as well.

Monday, December 3, 2018

Joystick Color Wheel with 3 Op Amps

I love microcontrollers but I've seen one too many 'Raspberry Pi deployed in order to blink a light' projects. Don't they know you can do that without a computer? They might not know.

I was sitting at the hardware table at HackUMass, watching everybody check out Arduinos and Raspberry Pis and ignore the transistors. So I thought I'd make a few simple circuits for demonstration. First was a simple flex sensor controlling an LED. Then I inverted the behavior--the flex sensor turned the LED off instead of on. Then I did the same sort of thing with the potentiometers from a joystick. It made sense to upgrade to an RGB (red green blue) LED. But there are two potentiometers (and one switch) on the joystick, and three colors in the LED. What kind of behavior would be most satisfying?

It was suggested that I implement a classic color wheel: three axes, set 120 degrees apart from each other, for red, blue, and green.

Okay, perfect--I can do that with some op amps. There are two axes (potentiometers) on the joystick, each configured as a voltage divider. We need to make weighted sums of the X and Y voltage divider outputs to create the Blue and Green directions. Red is aligned with the Y axis already.

I went through a few plans for the design. I ultimately settled on the LM358N chip using a single-sided supply and a virtual ground. The virtual ground I set to half Vcc with a simple voltage divider, guessing that the joystick rests at half Vcc (might not be completely true).

The Blue direction sits 30 degrees below the X+ direction: X*cos(30) + Y*sin(30), implemented in a summing amplifier--that's the first op amp. For Green, I used the same calculation but with the X axis flipped using an inverting amplifier, so the Green axis takes two more op amps. There are two op amps per LM358N, so that's two ICs.

Each axis controls an NPN BJT transistor (I chose the 2N4401 for this). The Red axis is just the output of the joystick Y-axis voltage divider controlling its transistor. It looks like it would go in the wrong direction (-Y), but because the summing amplifiers also invert the value relative to the virtual ground, everything ends up working out.

Finally, there are potentiometers inline with each base resistor to allow calibration of the three color channels. I got the best results when the lights are all on and balanced to a medium white in the joystick's default center state.
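Numerically, the three channels are just projections of the joystick deflection onto axes 120 degrees apart, clipped at zero. A sketch of the ideal math (the axis angles are my reading of the layout; in the circuit, the signs get absorbed by the inverting stages and the virtual ground):

```python
import math

def channel_drive(x, y):
    """Project the joystick deflection (x, y) onto three axes 120 degrees
    apart: red along +Y, blue and green 30 degrees below the +X and -X
    directions.  Negative projections are clipped, since a channel's LED
    can only be driven on, not 'negative on'."""
    axes = {"red": 90, "blue": -30, "green": 210}   # degrees
    out = {}
    for name, deg in axes.items():
        a = math.radians(deg)
        out[name] = max(0.0, x * math.cos(a) + y * math.sin(a))
    return out

# Pushing the stick straight up lights red only:
channel_drive(0.0, 1.0)   # -> {'red': 1.0, 'blue': 0.0, 'green': 0.0}
```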

This description wasn't very detailed, but it isn't a tutorial. I'm hoping to create a series of high quality instructional resources in the future, and I'd create a module on this circuit as part of that. It takes a lot of time to create high quality content though and I just don't have time at this moment. So for now, if you want to make one, I'll leave the details of implementation as an exercise to you, the reader.

Finally, here's the video. I put a piece of plastic over the RGB LED to diffuse the light, because it had a clear package. That made it much nicer to look at, but the video still suffers from poor dynamic range.

Wednesday, November 28, 2018

Ubuntu won't boot, waiting on dev-mapper-cryptswap

I updated Ubuntu 18.10 cosmic and of course everything was crappy. The trackpad started registering spurious touches of my palm in the upper left corner, making typing very frustrating. Then the machine started hanging with a weird window manager (GNOME?) glitch after sleeping all night with the lid shut, so I had to hard reboot in the morning.

This happened twice, and then it wouldn't boot anymore. It would hang on the purple screen with the word 'ubuntu' and some loading-bar dots. I force rebooted (hold Shift for GRUB), selected 'Advanced options', and chose a recovery mode entry for the latest kernel version. Then I could see that it was endlessly waiting on dev-mapper-cryptswap1.device, which is why it would not boot, not even in recovery.

I found several sites suggesting I edit the /etc/fstab file and comment out any lines mentioning cryptswap. (To-do note for myself: figure out whether I should set up an encrypted swap again, or just live without swap.)

OK but if I can’t boot, I can’t access any shell! How do I edit the fstab file??? I found this answer:

But I was missing the context for it. Where do I enter that emergency boot parameter?

I was getting sick of typing google search queries in to my phone, so I went back to Grub and looked around. In the advanced options section, the text suggests you can press ‘e’ to edit the entry. Then I saw something like this:

setparams ‘Ubuntu, with Linux 4.18.0-11-generic’
    gfxmode $linux_gfx_mode
    insmod gzio
    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
    insmod part_gpt
    insmod ext2
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy --fs-uuid --set=root [uuid]
    else
      search --no-floppy --fs-uuid --set=root [uuid]
    fi
    echo ‘Loading Linux 4.18.0-11-generic …’
    linux /boot/vmlinuz-4.18.0-11-generic root=UUID=[the uuid] ro acpi_rev_override quiet splash $vt_handoff
    echo ‘Loading initial ramdisk …’
    initrd /boot/initrd.img-4.18.0-11-generic 

So that seemed promising. I added a -b to the /boot/vmlinuz arguments, like so:

linux /boot/vmlinuz-4.18.0-11-generic root=UUID=[the uuid] ro acpi_rev_override quiet splash $vt_handoff -b

Then I hit Ctrl+X, as the instructions suggested, to boot. This doesn't permanently change the options; it boots with this modified entry just this once. So I entered emergency mode, hit Ctrl+D as the instructions suggested, and I was in.

Back to the instructions from the answer from before:
In case this leaves the root file system read-only, you can run mount -o remount,rw / once in the shell.
I didn't try skipping this; I just assumed it was necessary, and that had I skipped it, I would have found myself looking at my file system in read-only mode.

So back to
The solution is to remove or comment out the "cryptswap" entries from /etc/fstab and /etc/crypttab. This can be done easily by editing the above mentioned files as commenting out the lines that say cryptswap by placing a "#" in front of the matching lines.
I did that and saved the file in nano (I forgot how to use nano, but the bottom of the screen suggested some commands, so I used the one for exit and then typed Y when it asked whether to save before exiting). Then I restarted the computer, I believe with the shutdown command, pressing the power button afterwards to boot back up.

Then I figured before I got back to work, I could make a short blog post out of it. Here you go.


Tuesday, June 5, 2018

CRT and Magnets Exhibit

This Cathode Ray Tube + Magnets exhibit started off with a couple of CRT monitors that were gathering dust in storage. I was asked to consider putting them out in the main space of the makerspace. I decided I would only allow this if they did something. I set out to decide on what that something would be. It turned into a fun, easy, accessible to all ages exhibit that we now turn on for all the tours we lead through this makerspace. It is a great way to quickly and cheaply construct a meaningful interactive science exhibit to add to your collection.

Here's a document I put together to explain what's happening. I taped it to one of the TV antennas (the antenna is not used) so it stays front and center in the exhibit, and people are encouraged to actually read it. Here's a lower quality image of the same document so you can see it embedded in the post.

Here's the bill of materials:

  • CRT television. You may have to turn to eBay or Craigslist. These may only get harder to acquire with time.
  • If the CRT has VHF/UHF inputs, you'll need a box like this one to convert the signal to composite video:
  • For the camera, a backup camera is cheap and outputs a composite video signal over RCA connectors.
  • If you'd like to place the camera elsewhere, you can get a cheap 2.4GHz transmitter like this one
  • A powerful neodymium magnet. You want something strong enough to have an effect and a good size to be easy to handle. Something like 0.5 inch diameter and 0.5 inch height seems like a good size to me, but I'd suggest just seeing what's available and trying it. You can always stack up multiple smaller magnets.
Assembly is just a question of plugging everything into power and getting the signal into the TV. Note that the backup camera requires a little work to hook up; it is designed to connect to a 12V car battery connection point. You will need a 12VDC wall adapter if it does not come with one, and you will need to make sure the red/positive and black/negative connectors go to the right places. You will probably need to solder at least one connection, or use another strategy to connect the wires.

The magnet should be protected with something soft to avoid it hitting against metal and breaking or pinching fingers. I used two furniture feet and some masking tape. I also suggest putting it on a string so it doesn't wander off.

Cable management was the longest part of the project. I zip-tied all the extra length of cabling in back of the assembly. A single switch controls the extension cord to which the entire unit is plugged in. Note the CRTs make a high pitched noise that some people don't like to hear all day, so I don't leave it on all the time, since the main room of the makerspace is also a meeting and study space.

The cameras are taped to the top of the CRT and pointed at brightly colored pieces of paper. This is important because the magnet effect is not nearly as visible on black & white images.

An additional, optional modification I made to the backup cameras was removing the infrared LEDs they contain (which are there to provide visibility at night). The camera ran very warm, and when the IR LEDs activated, which sometimes happened if the exhibit was in low light, the image washed out. I opened the unit and desoldered the IR LEDs. My original plan was to put gaffer's tape over the IR LEDs, but that seemed to make the camera heat up even more. There is a light sensor inside the unit, and in bright lighting the infrared LEDs are not activated, so this is not necessarily something that needs to be addressed for the exhibit to function.

It is also important to note that the backup camera is designed to mirror images. Note the sign in the image above is printed as a mirror image in order to show up correctly on the display. The backup distance overlay is another artifact of the choice to use a backup camera; I find it is fun and adds to the color distortion effect since it is displayed in bright colors.

Wednesday, June 7, 2017

Indoors Navigation Robot ("Driving Drawers") Post 2

This post is about the path planning algorithm for the robot.

Because the robot is a non-holonomic, non-radially symmetric, non-zero turn radius type of vehicle, the path planning problem was fairly difficult.

I was struggling to find a readymade solution to the following problem: plot a curve between obstacles, with a constraint on the sharpest allowable curvature in the path.

For a setting with no obstacles, with a minimum turn radius specified, and where the starting and ending states are specified as a coordinate and angle in 2D space, I found Reeds-Shepp curves, which were already implemented in Python on GitHub.

I briefly considered using Rapidly Exploring Random Trees to search for solutions in this problem space, but decided on something different instead after reading some literature and testing some code. I think some of the variants I've seen published of RRTs would work nicely, but I didn't want to implement them in Python from scratch. Two repositories I looked at when making this decision:

I decided to go with A* for search. To prepare my problem for that, I first discretized the map of obstacles (represented by polygons) using a visibility graph. I built the visibility graph using another Python repository, pyvisgraph. I spent several days hunting down bugs in the code, turning up one case of rounding error and one case of logical error that were causing problems in my setup. I opened a few Issues through the GitHub portal so the author can fix them. I considered doing a pull request, but for just a few lines of code I figured the Issues would suffice. After fixing the bugs, I added support for interior polygons and for adding obstacles to the visibility graph without recomputing it, so I should probably clean it all up and do a real pull request eventually.

I also had to discretize the robot angle at this point. I decided to go with 4 of what I called 'cones': North, East, South, West. The code has this parameter as K, so I call them K-cones (you can just as easily specify five cones, but then they are harder to refer to casually when describing the solution). Since Reeds-Shepp takes an exact angle, when computing the cost of a path from (start point, start cone) to (goal point, goal cone), I try all four combinations of the clockwise and counterclockwise limits of the start cone and goal cone, then take the worst score and use that as the cost. Paths intersecting the obstacle map are ignored in this step, so they don't count toward the worst score.
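Stripped of the obstacle checking, the cone-to-cone edge cost looks something like this. The `fake_length` function is a stand-in for the real Reeds-Shepp solver, and the cone numbering is my own convention for the sketch:

```python
import math

def cone_limits(k_cone, K=4):
    """The clockwise and counterclockwise limiting angles of cone number
    `k_cone` out of K cones (cone 0 centered on 0 radians)."""
    half = math.pi / K                 # each cone spans 2*pi/K radians
    center = 2 * math.pi * k_cone / K
    return (center - half, center + half)

def edge_cost(start_xy, start_cone, goal_xy, goal_cone, path_length, K=4):
    """Worst case over the four combinations of start/goal cone limits.
    `path_length(p, a, q, b)` would be the Reeds-Shepp length from pose
    (p, a) to (q, b); paths hitting obstacles would be skipped here."""
    costs = [path_length(start_xy, a, goal_xy, b)
             for a in cone_limits(start_cone, K)
             for b in cone_limits(goal_cone, K)]
    return max(costs)

# Stand-in metric: straight-line distance plus a penalty for heading change.
def fake_length(p, a, q, b):
    return math.dist(p, q) + abs(b - a)

cost = edge_cost((0, 0), 0, (3, 4), 1, fake_length)
```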

Then I searched this space with A* and found solutions in the form of a list of (location, cone) pairs. But resolving this into a navigable path still required turning the cones (North, East, South, West) into exact angles. If there are n steps in the path, and we sample d regularly spaced angles within each cone to find the best overall path length, then the size of the search space is d raised to the power n. It isn't easy, or necessarily possible, to make n any smaller than it already is, so keep d small. I found that even d = 2 (checking just the most clockwise and most counterclockwise limits of each cone) could find a solution, and the differences between the best total path lengths found were minimal anyhow.
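
The d^n brute force over sampled cone angles might look something like this sketch (hypothetical function names; `leg_cost` stands in for the Reeds-Shepp length of one leg, and cone 0 is again assumed to span [0, 2π/k)):

```python
import itertools
import math

def resolve_angles(points, cones, k, d, leg_cost):
    """Brute-force search over d sampled angles per cone (d**n combinations).
    For d >= 2 the samples include both cone limits; for d == 1 the cone
    midpoint is used."""
    width = 2 * math.pi / k
    def samples(cone):
        if d == 1:
            return [cone * width + width / 2]
        return [cone * width + i * width / (d - 1) for i in range(d)]
    best, best_angles = math.inf, None
    for combo in itertools.product(*(samples(c) for c in cones)):
        total = sum(leg_cost(points[i], combo[i], points[i + 1], combo[i + 1])
                    for i in range(len(points) - 1))
        if total < best:
            best, best_angles = total, combo
    return best, best_angles
```

Since the cost of each leg depends on the angles at both of its endpoints, the legs can't be optimized independently, which is why the space is d^n rather than n*d.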

I used Shapely to handle geometry-related tasks like unioning polygons, checking for intersections, and so forth. I used matplotlib for drawing things to the screen.

You had to read all that, so here are some pictures!

An example solution after the angles have been resolved.

An easy solution, where the goal is immediately reached from the start point with a single Reeds-Shepp curve.

Here's a messy picture of all the edges in the graph for a particular setup. The pink things represent the cones.

This picture, which I saved at some point in my work, shows paths from points on the visibility graph that intersect the walls of this empty room. The intersecting sections are shown in red.

The strongest thing I can say for my hacked-together system, "A* over Reeds-Shepp with K-cones," is that it seems to work in all the cases I need it to. I can't say anything about the optimality of the solutions, or even about whether a solution will be found if one exists. Computing the visibility graph and its edge costs, expanded for the four cones at each point, takes about ten seconds on my laptop, for what that's worth. The result can be saved and reloaded for future use, assuming the obstacle map hasn't changed.
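
Saving and reloading the precomputed graph can be as simple as pickling it. A minimal sketch, with a hypothetical cache filename (and no staleness check beyond the file existing, so delete the cache whenever the obstacle map changes):

```python
import os
import pickle

CACHE = "visgraph_cache.pkl"  # hypothetical cache filename

def load_or_build(build_fn, cache_path=CACHE):
    """Reload precomputed visibility-graph data if a cache file exists,
    otherwise build it once with build_fn and save it for next time."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    data = build_fn()
    with open(cache_path, "wb") as f:
        pickle.dump(data, f)
    return data
```
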

I'm taking a break from this project to prepare for the Advanced Topics in CS class that I will be teaching this summer (high school grade levels, but college-level materials). When I come back to this project, I will take the hardware from the last post, the path finding from this post, and a few bits of math I've worked out in my notes, and put the whole thing together into a robot that navigates from room to room using ArUco tags.

Indoors Navigation Robot ("Driving Drawers") Post 1

[Foreword: you may notice the fonts are mismatched in this post. Or maybe you noticed that the images in all the posts on this blog, when viewed on a larger screen (as opposed to a small mobile device screen), are placed haphazardly within the body of the post. This is because Blogger's WYSIWYG post editor leaves much to be desired: WYS ("what you see") is not always WYG ("what you get"), and the options for aesthetic image layouts in the body of a post are very limited. I am planning a Jekyll-based webpage for this blog at some point that will fix these problems. Until then, this is a cheap and easy way to make sure I keep a blog at all.]

I had an idea for a chest of drawers that would drive to my location in a space (assuming no stairs).

Here's the first part of my progress on that project.

Physical Build

I started by dissecting my collection of Roombas, taking the wheels from two different models, and hot-gluing them, along with some support material, to the base of a plastic chest of drawers.

I purchased a Raspberry Pi 3 and a PiCam, and while I waited I put together a 3D-printed pan-tilt design from Thingiverse. I'll provide a link (link to thing), but keep in mind I do not recommend this design. Unless your 3D printer is very precise, you'll have to do quite a bit of filing and sanding to get it to go together, and the pan servo absorbs all the impact whenever the head hits something, so it will eventually fail (mine did). If you can find a ring of the right thickness and diameter to fit between the orange disc and blue plate in the photo, the problem is mitigated (I used a bearing I found in a bin, which is way overkill since it isn't even acting as a bearing, but it was quicker than cutting my own ring on the lathe).

Not entirely certain of the layout I wanted, I just taped everything in place with masking tape. The battery is stored underneath, in the topmost drawer of the robot. It's a 30,000 mAh battery I bought for use with my smartphone, and it has a port that sources 2.5 A, which the Raspberry Pi needs. I paid about $75 for this model; you should be able to find comparable batteries from other brands if that one is not available (sometimes, when an item is out of stock, a few vendors will offer it at an inflated price, so beware: the price on Amazon for this model was briefly $399 before dropping to $69.99 again).

I pulled the Arduino Mega out of a project I'm not working on at the moment; it is of course overkill for this application, but I wasn't sure how many sensors and actuators I wanted on board, and its 54 I/O pins leave plenty of room to grow the project. The Raspberry Pi 3 itself only has one PWM-enabled pin available to me, so the Arduino is convenient for handling all the low-level stuff. It talks to the Raspberry Pi over USB. The micro servos are powered from the Arduino Mega, which is in turn powered off the Raspberry Pi; the micro servos' stall current is low enough for this to be possible.

The Roomba wheel motors are safe at 12 volts (the Roomba battery voltage), so I put another battery in the system just for them: a 3-cell lithium polymer battery, which measures roughly 11.1 volts when it needs to be recharged and 12.6 volts when fully charged. The motor drivers are L298 chips on those red SparkFun breakout boards, with the heatsinks mounted to them.

So at this point the robot was driving, but only in a straight line. Turns would drag at least one wheel and make a terrible noise. Only very slight turns worked. This was fairly predictable, but trying to make it work anyway was very much in keeping with my glue-and-tape, then iterate style of prototyping. Jesse helped me put together a steering mechanism in a very short amount of time. It worked, so that evening I took the robot out for an inaugural journey around Sector67, using it as a sort of telepresence robot as I controlled it from my desk.

Then I broke the steering mechanism gearmotor by switching it back and forth too fast when I got stuck in a corner. The gear before the final output shaft broke into a bunch of tiny pieces.

I replaced it with the gearmotor on the right in the image above, the one with the partly blue casing. Now that I had a working robot again, it was time to work on the high-level path planning and code. I'll put that in the next post.

Sunday, May 7, 2017

FRC Machine Shop with Sector67

In April, Sector67 volunteered to run a machine shop for both the FIRST Robotics Seven Rivers Regional and the St. Louis Worlds events. It had been a while since I had participated in FRC in any form (though I did mentor an FTC team this year, which is another FIRST competition).

Here are some photos. Since I only took photos when there was a break in the work orders coming in to the shop (usually because there was some mandatory attendance event for teams), this photo set gives the impression that we were a lot less busy than we actually were.

Seven Rivers Regional one view of the machine shop

Seven Rivers Regional game field

Teaching lockpicking at the table (photo left)

St. Louis Worlds Machine shop front side, 3D printer and laser cutter with fume extractor

More stuff in the St. Louis machine shop setup

View of the St. Louis playing fields in the convention center arena