Friday, December 16, 2022

Wire loop game (2018 project that I forgot to post)


the wire loop game

(there was a video here; I'll try to put it back later. It got lost when I tried to fix the formatting. Blogger seems kind of dead. I should migrate this blog...)




(I forgot about this one! I did half the write-up when I wrapped up the project in 2018 and then forgot entirely about it. here it is now...)

This is an implementation of https://en.wikipedia.org/wiki/Wire_loop_game

Game behavior:
To start the game, the player touches the loop to the wire on the starting post (in red). This starts the counter over from zero, and counting starts when the loop comes off the starting post.
If the player touches the loop to the track, the player loses (indicated by red light and buzzer) and the timer shows the final time when the game was lost.
If the player touches the loop to the wire on the goal post (in blue) then the player wins (indicated by green light and, for now, the buzzer [which was intended to be replaced with a simple sequence of tones]) and the timer shows the final time when the game was won.


The physical platform
Goal posts modeled in OpenSCAD





PVC pipe attached to a wooden platform with hot glue. The wire is some salvaged coaxial stuff, if I recall. It is pretty rigid but can be bent into new shapes to change the difficulty of the game.

The counter and timer

I picked out some 1” tall seven segment displays and the CD4017BE which is both a counter and a seven segment driver. I used rainbow colored ribbon cable and female 0.1” pitch header pins to create connectors that were easy to trace, since there are so many connections needed.




I decided on 4 seven segment displays: 100 milliseconds, 1 second, 10 seconds, and 100 seconds. The 100 seconds is a little extraneous, since most people don’t take that long to either fail or win the game. The CD4017BE units have a carry out pin, so I only need to feed a 100ms clock to the first unit, and then put the carry out of that unit to the clock on the next unit, and so on.
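The cascade is easy to picture in software. Here's a toy Python model of four chained decade digits (purely an illustration; the real carry-out timing on the counter ICs differs in detail, but the rising edge that clocks the next stage still comes once per ten input pulses):

# A toy model of the cascaded decade counters (illustration only).
class DecadeDigit:
    def __init__(self):
        self.count = 0

    def clock(self):
        """Advance one count; return True when rolling over from 9 to 0
        (this stands in for the rising edge of the carry out)."""
        self.count = (self.count + 1) % 10
        return self.count == 0

digits = [DecadeDigit() for _ in range(4)]   # 100 ms, 1 s, 10 s, 100 s

def tick_100ms():
    carry = True                 # the 100 ms clock drives the first digit
    for d in digits:
        if not carry:
            break
        carry = d.clock()        # carry ripples into the next digit

for _ in range(1234):            # simulate 123.4 seconds of game time
    tick_100ms()

print("".join(str(d.count) for d in reversed(digits)))   # prints 1234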





For the 100ms clock I debated using a high frequency crystal and dividing it, to achieve good precision in timing. But I ended up going with a 555 timer. The circuit is built as a 50% duty cycle oscillator as seen in this tutorial.
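For reference, in the common 50% duty cycle variant (where the timing capacitor is charged and discharged through a single resistor from the 555's output pin), each half period comes out to roughly 0.693*R*C. I no longer have the exact component values written down, so the numbers below are just an illustration of the sizing:

# Rough sizing for a ~100 ms period from a 50% duty-cycle 555 oscillator
# (illustrative values only; the trimmer pot is what actually dials it in).
R = 72e3      # ohms (example value)
C = 1e-6      # farads (example value)

t_half = 0.693 * R * C          # time the output spends high (and low)
period = 2 * t_half
print(f"half period: {t_half*1000:.1f} ms, period: {period*1000:.1f} ms")
# half period: 49.9 ms, period: 99.8 ms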



The ~100 millisecond clock


I also made the choice here to use a 4 cell AA battery pack, which maxes out at about 6V with fresh alkaline batteries. I was sticking with a theme of using older logic ICs that were rated for 5-15V or wider ranges. A couple of more modern devices I looked at briefly had maximum recommended voltages in the range of 4.5-5.5V, so this eliminated some options.



I added a trimmer potentiometer to help dial in the frequency, which will drift with things like ambient temperature. I had fun dialing it in to 100ms though it definitely drifted as I added to the output of the circuit.

The state machine

Here I took a moment to design the state machine for the system. I knew I wanted indicator lights and indication sounds for both winning and losing. I came up with the following list of states: Idle, Active, Fail, Win. I simplified this further into two categories: game active and game inactive.

When the game has started, the timer should count up. When the game is over for whatever reason (win or lose), the timer should stop counting up and flash the numbers on and off to make it obvious to the user. I dropped the Idle state as optional and not worth implementing, because as I originally imagined it, it was only used between the time the user turns on the system and the time the game first starts.

I wanted a press for either the fail switch or the win switch to end the game, so I added a CD4071BE.

I picked a JK flip flop to implement this very simple state machine. I found we only had one type, the CD4027BE. There are two JK flip flops on each chip.

[diagram that I didn't make but did describe as follows: J goes to Win OR Fail. K goes to Start Switch. Reset goes to Start Switch. Qbar goes to Game Active] [I also noted to myself to check this is correct]

When the start switch is touched, the game state is reset to active. However, since the reset is also tied to the reset of the seven segment drivers, holding the start switch closed will keep the timer held at 000.0 seconds. When the start switch is removed, the game is active. When the user hits either the fail or win switch, the game goes back to inactive. The clock for the JK flip flop is another 555 timer set to a 1 millisecond period, because at that rate I could not brush the wand against the copper wire quickly enough, even on purpose, to keep a touch from registering.
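To convince myself the wiring described above behaves the way I wanted, here's a tiny Python sanity check of that state logic (a sketch I wrote for this post, not the original circuit; it models the JK truth table and an asynchronous reset, nothing about real chip timing):

# Game state logic as described in the diagram note above:
# J = win OR fail, K = start, async reset = start, game_active = Qbar.
def jk_step(q, j, k):
    """Synchronous JK behavior on a clock edge."""
    if j and k:
        return not q      # toggle
    if j:
        return True       # set
    if k:
        return False      # reset
    return q              # hold

q = False                 # Q low => Qbar high => game active

def clock_tick(start, win, fail):
    global q
    if start:             # the reset input is tied to the start switch
        q = False
    else:
        q = jk_step(q, j=(win or fail), k=start)   # K is also tied to start
    return not q          # Qbar drives the "game active" signal

print(clock_tick(start=True,  win=False, fail=False))  # touching start post -> True (active, timer held at 0)
print(clock_tick(start=False, win=False, fail=False))  # playing -> True (still active)
print(clock_tick(start=False, win=False, fail=True))   # touched the track -> False (game over)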

At this point, I created the other parts of the game ‘switch.’ The main maze/course is the ‘fail’ terminal. A small loop of wire on the start post is the ‘start’ terminal. Another small loop of wire on the ending post is the ‘win’ terminal. Finally there is the loop of wire with a handle that the user manipulates. To successfully solder to the copper wire required a lot of heat (I used a Metcal and took my time) and a lot of flux.

At this point the game was playable and people enjoyed it.

Unfinished notes

At this point the writeup is a little sparse... It appears that I was not diligently keeping notes as I worked and was having to go back and recreate my thinking... here it is unedited anyway:

[remind yourself to look into what that second jk flip flop was used for]

The state machine must know whether it is inactive because it failed or because it won

Fun with BJT transistors, describe that here

Turning on and off the dot with the other segments
Trying to make sure that at power on the segments don't bounce up one clock, and that the green light does not default on, involves caps, but as I added more circuitry sometimes I needed to change those.
The fail buzzer
Differentiating input for the 555 timer controlling the note length

Dealing with needing negative input and suppressing wrong way spike

Trying to debug the power draw issue

Deciding on how to disable the buzzing after timer is done. (by disconnecting speaker from ground on the other side)

I'm posting this now without finishing up my notes--Better late than never.

The intent of the notes was to consider turning something like this into a partially guided class project. There was this idea that some students get lost in the world of all possible projects and prefer a menu or some framework to work from. The internet is too broad a menu and sometimes students pick projects that aren't appropriate to the scope of the class. If a student needs to pick a fun digital logic project, maybe it is easier to look at a list of five or so ideas, and those with strong opinions and creative drive can mix and match and remix and propose a new thing entirely if they feel so inclined.

Once a student connects with a project like this, they still have a lot of creative direction available in terms of the features they choose to implement and how they choose to implement them. For example, somebody else might want a minute timer (not a 100 second timer). Or to add the unimplemented winning noise. Or to implement the idea of flashing the score at the end of a game. Or to use a more precise timer to drive the time counting.

Wednesday, April 8, 2020

The Polyhedra Project (video summary)

I finally got some time to edit old footage...and build a garden...and breathe, think, reflect.

Here is a video summarizing my parametric 3d printed magnetic polyhedral assembly tiles. 

About 8 hours in Adobe Premiere to do the bulk of the work, then a few hours to make final edits and export it.



Sunday, December 15, 2019

Endless time sink projects!

It has been a difficult but exhilarating semester.

We made a makerspace! Here it is all packed up because we needed to move it to another location last week.



Thinking about the makerspace ate up all my waking hours since it opened. After a few months I started to feel unsettled by how little other identity I had left. Makerspace director is an all-encompassing role in two senses--having to do nearly everything related to it, having everything you do be related to it. You only ever get to make one thing, the makerspace.

But at the end of the semester I have found a little time to think and regroup. I need to make things in addition to making the makerspace. I'm not one to do anything casually enough to call it a hobby. So... I've started up the Blender Graphic Novel project again.

Because it seemed like complete and utter folly to reopen that project (another infinite time sink), first I tried lots of things, like watching movies and going out to eat with friends. But nothing else seemed to plug the hole.

Comic from Dresden Codak

So, just like with the makerspace project, I try the impossible thing anyway, then it doesn't always go well, then in private I cry and get angry, and then I use that anger to rhetorically dare myself to give up, which I will never do, so I swear stubbornly to try again, and the cycle repeats. And sometimes on a night like tonight I even make some promising incremental progress.

Incremental progress. Just testing out a few concepts

What I've learned in re-opening the project for the first time since 2016:

I'm still relying heavily on Blendswap for assets, but it feels like that website/community has lost some momentum. The environment above is from https://www.blendswap.com/blend/10299
Most of the assets I had found in the past still seem to be there so that's good, but the search function is pretty bad and all my Likes/Collections from before are now all empty.

In 2018 Manuel Bastioni decided to try raising funds for the MBLab project, but after about three days he gave up and shut the whole project down. It seems he expected a large audience to materialize (with not much thought given to advertising) and that most of them would immediately donate. The project is now on Github https://github.com/animate1978/MB-Lab

Marvelous Designer is up to version 9 from version 6, and it seems the same. I'm a little frustrated with it but at the same time it is the only tool that does what I need, and it gets good results (but slowly due to clunky workflow).

The sewing pattern community is still disappointing. Everybody seems only interested in making a few bucks here and there for the patterns in their collection. Very few in the community seem really attuned to concepts like public domain, digitally archiving historical patterns, getting into the nitty gritty of patent, trademark, and copyright law and the first sale doctrine, and understanding how they do and don't potentially affect sewing patterns and fashion. I wanted to try paying for ready-made Marvelous Designer garment assets but found very few for sale.

Blender Internal is gone in Blender 2.8 and I'm using Eevee instead. Freestyle still works, and is still pretty slow. Eevee is much faster than Cycles and I really don't need raytracing for the style I'm going for. I can use viewport rendering which helps workflow, so long as I keep the number of vertices sane.

Big directions to explore next:
- Hair...still difficult. Need to practice a bit.
- How to effectively storyboard so I don't waste time on scenes and assets I don't need, and so that the pages and chapters are cohesive (visually, and from a storytelling perspective)

https://www.reddit.com/r/funny/comments/eccj2/how_to_draw_an_owl/

Saturday, June 22, 2019

Balloon Battle

I wanted to run a team-building activity this summer in the makerspace, and settled on the format of a robot Balloon Battle.

After the activity I spent an evening learning Adobe Premiere and putting together a video. So this was a mix of a workshop and a video editing project.

Shout-out to M5 staff and volunteers for helping with and participating in this event.

Here's the video!



Saturday, June 8, 2019

Nonholonomic Driving Robots Revisited: RRTs

"Screenshot of RRT"

Back in 2017, before a major geographical move and career change, I was working on a project where I built and wrote an A* based path planning algorithm for a nonholonomic robot:
http://buildingfriends.blogspot.com/2017/06/indoors-navigation-robot-driving.html

In the process of doing research for that project, I had come across the concept of Rapidly Exploring Random Trees:
https://en.wikipedia.org/wiki/Rapidly-exploring_random_tree

I tested one simple implementation on GitHub (I can't remember which) and thought about what would be involved for using it for my path planning approach. I decided I would need to consider an extension and probably would not be successful with a basic RRT alone. I identified Theta* RRT as a promising technique for solving my particular path planning problem.
http://idm-lab.org/bib/abstracts/papers/icra16.pdf

But I ran out of time and confidence in my ability and went with what I already understood fairly well, which was A*, and wrote about it in this blog (but never got around to publishing it on GitHub).

Since I moved in 2017, nearly all my projects involve making makerspaces. Instead of making projects, I help other people create projects. And recently, rather than helping other people create projects, I create systems to help people help other people to create projects. A big theme in my life is institutional sustainability/long term institutional survival/development of infrastructure that is not highly dependent on a single individual's expertise, presence, and commitment. I am trying to automate my job (not that I don't enjoy doing it, but it feels like the responsible thing to do; too many makerspaces and other institutions have serious existential risk posed to them by there being one leader upon whose shoulders everything rests).

 I haven't had too much time to collect my thoughts into a post, though I will keep trying in the future.

I revisited Theta* RRT nearly two years later in the context of a graduate level robotics class I signed up for, perhaps foolishly.

The robotics class was excellent, but a huge time sink. For the first time in two years I wasn't teaching a three hour lab based class once a week, so I figured that freed up about 10 hours a week to spend on this. The professor for this class said to expect to spend 5 hours a week on average on the class, so I figured I was good...

There were 5 assignments due over the 13 week semester. I spent only 15 hours from start to finish on at least two of them (but those were the easiest ones). If they had all each taken 15 hours, 15 hours * 5 assignments / 13 weeks is about 5.8 hours a week. But many of them took me much, much longer than 15 hours. I probably spent 50-60 hours on the Theta* RRT project. On top of that were required and optional readings. These weren't really enforced. I hate to admit I did nearly none of those readings. I was really struggling to balance my job and my desire to perform really really well in this class, and so I tried to limit my time expenditure to only those things that directly would affect my grade in this class. I guess when I was a kid I had to make compromises like 'do well in school' vs. 'socialize and have fun.' But now I make mostly compromises like 'do well in school' vs. 'spend more time on work' and I'm starting to really feel like an adult...

Fast forward to the end of the Spring 2019 semester. I'm trying to finish my final project, for which I've chosen to implement Theta* RRT in Python, and also my final other assignment, which is an implementation of RRTs in C++. I pushed all my other commitments forward and sat at my dining room table working for 15 hours straight on the last day, accidentally causing what might be permanent nerve damage to one of my legs (it has been a month and still I have a large area without much skin level sensation on my shin). I do feel that it was worth it, though if I had known I would have gotten a more comfortable chair which would have prevented that from happening. I would have also found a way to make more time for this project in the weeks prior.

(screenshot)

The rapidly exploring random tree works like this (glossing over details):

There is a starting position and orientation for the car (red, above) and we want it to reach the goal position and orientation (green, above). The car can't steer very sharply; it has a limited turn radius. There are many ways it could reach the goal, but it is impossible to search them all, because the x and y position, as well as the angle of the car, are all continuous real variables, so infinite and uncountable.

So at every step of this algorithm, which we will repeat until we find some path that drives us to the goal within a tolerance, we will randomly choose some x,y position and car angle.

We're going to keep track of paths in this process in the form of a tree, branching from the red car (starting node). We'll identify the configuration in the tree nearest to the new random configuration (note that in the beginning, the only thing in the tree is the starting node).

Then we will try to drive to the new random configuration. We probably can't find a way to drive to the new configuration exactly, but we'll drive towards it and wherever we land, we'll add that to the tree, keeping track of that steering path we used to get there. We'll always check to see if we made it close enough to the goal, and if we did, we'll extract the path from the tree that got us there.

The entire tree is shown in black, and the extracted solution path in blue.
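In case it helps make the loop concrete, here is a stripped-down Python sketch of the idea (not the code from my project: it ignores obstacles and the goal orientation, samples positions only, drives forward only, and tries just a few fixed steering curvatures):

# Minimal RRT sketch for a car with a limited turn radius.
import math, random

TURN_RADIUS_MIN = 1.0      # the car can't turn tighter than this
STEP = 0.5                 # how far we drive toward each random sample
GOAL_TOL = 0.5

def drive(x, y, theta, steer, dist):
    """Advance the car along an arc (or straight line) of length dist."""
    if abs(steer) < 1e-9:
        return x + dist * math.cos(theta), y + dist * math.sin(theta), theta
    radius = 1.0 / steer                      # signed turn radius
    dtheta = dist / radius
    cx = x - radius * math.sin(theta)         # center of the turning circle
    cy = y + radius * math.cos(theta)
    theta2 = theta + dtheta
    return cx + radius * math.sin(theta2), cy - radius * math.cos(theta2), theta2

def rrt(start, goal, iters=5000):
    nodes = [start]                           # configurations (x, y, theta)
    parents = {0: None}
    for _ in range(iters):
        sample = (random.uniform(-10, 10), random.uniform(-10, 10))
        # nearest existing node to the random sample (by position only)
        near_i = min(range(len(nodes)),
                     key=lambda i: (nodes[i][0]-sample[0])**2 + (nodes[i][1]-sample[1])**2)
        x, y, theta = nodes[near_i]
        # try a handful of allowed steering curvatures, keep whichever lands closest
        best = None
        for steer in (-1/TURN_RADIUS_MIN, -0.5, 0.0, 0.5, 1/TURN_RADIUS_MIN):
            nx, ny, nth = drive(x, y, theta, steer, STEP)
            d = (nx-sample[0])**2 + (ny-sample[1])**2
            if best is None or d < best[0]:
                best = (d, (nx, ny, nth))
        nodes.append(best[1])
        parents[len(nodes)-1] = near_i
        # close enough to the goal position? walk the tree back to extract the path
        if (best[1][0]-goal[0])**2 + (best[1][1]-goal[1])**2 < GOAL_TOL**2:
            path, i = [], len(nodes)-1
            while i is not None:
                path.append(nodes[i]); i = parents[i]
            return path[::-1]
    return None

print(rrt(start=(0.0, 0.0, 0.0), goal=(5.0, 5.0)))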

(screenshot)
One of the very lucky trials for the plain RRT. Note how it considered backing up in one iteration, but otherwise explored directly to the goal.

Parallel parking:
(screenshot)

(screenshot)


The Theta* RRT project took longer than expected and I didn't complete everything I wanted to put into it. So I'll leave a more in depth post about it for after I get around to revisiting it and completing all the parts I wanted to complete. You can follow along on GitHub:
https://github.com/eshira/theta-rrt

Tuesday, January 22, 2019

Rhombic Enneacontahedron Post #2

Alright, here's the code. It is a total mess, but I knew I just wasn't going to get around to even starting to clean it up, so I just committed it before I forget about it (and likely struggle to locate it in the future).

https://github.com/eshira/polyhedra/tree/master/Rhombic-Enneacontahedron

I figured out the degree of freedom I forgot about. When you truncate, you can move new vertices in along old edges some amount, which you can choose. I split edges in three, moving each new vertex in by a third (see the diagram from wikipedia that I included in the previous post). Splitting in three makes sure you create regular hexagons for the soccer ball step (the truncated icosahedron). You don't have to create regular hexagons, and by modifying this you can make slim and broad rhombi, as shown in the images below.

This explains why I didn't match the values for the angles that I found on Wikipedia for the rhombic enneacontahedron. I suspect that most of the rhombic enneacontahedrons people make come from just a few instructional guides or paper folding guides online, which often refer to slim and broad rhombi, so they probably aren't making regular hexagons in the truncation step. Making regular hexagons, as I did, seems to yield two very similar rhombi that you can't easily call broad or slim.

Extreme cases caused by changing the parameter of the truncation step seem to be less 'round'



Here's the final 3D printed assembly, after gluing in the magnets. For reference, the edges that the white and red tiles share are 2 cm long.



It would be fun to print the exaggerated versions with very broad and very slim rhombi at some point in the future.

Want to make your own? The code isn't super clean, so here are the steps at a high level:

  • If you want to modify the truncation, you'll need to locate that section of the code, modify the appropriate line, and then get the code to spit out all the info you need on the angles and side lengths that are needed by the OpenSCAD script. There are a bunch of poorly commented areas where you can follow my pattern--I was working quickly and I just didn't make it easy to use yet. Sorry!
  • If you keep the angles as in the OpenSCAD script, you'll make a shape like the one I made. You need to make sure the user controlled side length is the same for the tiles of type A and B. You also need to make sure you match the magnet info between those two types of tile.
  • You can do this with just one magnet per edge if you want to save on magnets, though it won't be as strong and the tiles can pivot until locked into place by neighbors. You will need to go through my code because I didn't make it parametric either
Easiest way to do it? Keep everything exactly the same as my OpenSCAD code, render the STL and slice it for your 3D printer. Get at least 90*4 magnets, 3mm diameter by 2mm height, axially aligned. The magnet diameter and depth values in the script might still need to be adjusted to get a nice fit, depending on your 3D printer.

Wednesday, January 9, 2019

Rhombic Enneacontahedron

Over a year ago I decided the next 3d printed magnetic tile based polyhedron I would create would be the rhombic enneacontahedron.

In the past I created shapes mostly by looking up the dihedral and face vertex angles from the internet. Only for the most recent one, the parametric pyritohedra/dodecahedron script, I generated the points of the whole polyhedron and manipulated them, before creating the individual faces as tiles. For the trapezohedra, some of the ones I attempted to create had the wrong angles, and it wasn't clear to me if the issue was in the angles I had gotten from the internet or if my OpenSCAD script contained some errors. I decided though that the right way forward would be to build the tiles as I did in the pyritohedra script, by creating the full polyhedron and then computing the angles for the tiles myself.

I had just learned about Conway polyhedron notation and considered that the right way to build this would be to use that concept. But I didn't end up getting a library for Conway polyhedron notation, or creating a library for these operators. Instead I did everything a bit less abstractly and elegantly than I'd have liked, but I think that's normal to expect for a first draft.

I switched from OpenSCAD to Python at this point, because I find it impossible to do anything with for loops in OpenSCAD (highly inefficient and slow). For creating polygons in 3D space I used pyny3d, which is already integrated with matplotlib for drawing. And I use numpy for math stuff. I finally switched to Python 3 for this and wrote every print statement incorrectly the first time...

We begin with the icosahedron:


For the icosahedron I started with a list of 12 vertices as listed on various places online.
Each of the vertex descriptions below, in (x, y, z) format, expands into four vertices, since there are ± options in two of the dimensions for each one.
(0, ±1, ± φ)
(±1, ± φ, 0) 
(± φ, 0, ±1)
Where φ (phi) is the golden ratio, 0.5*(1+sqrt(5))
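In Python that's just a few lines (a sketch; the variable names are mine for illustration):

# Generating the 12 icosahedron vertices from the (0, ±1, ±phi) pattern.
from itertools import product

phi = 0.5 * (1 + 5 ** 0.5)          # the golden ratio

vertices = []
for a, b in product((1, -1), repeat=2):
    vertices.append((0, a, b * phi))
    vertices.append((a, b * phi, 0))
    vertices.append((a * phi, 0, b))

print(len(vertices))                 # 12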




The first goal is to truncate the icosahedron. I came up with my implementation of truncation by interpreting the picture and one-line description from Wikipedia, which I'll include below (it is drawn on an example cube):


Truncate cuts off the polyhedron at its vertices but leaves a portion of the original edges.

I describe truncation like this:
For each vertex, find every edge it is a part of, and create a new vertex one-third of the way inward along that edge. The new shape's vertices are these newly created vertices.
The new shape's faces I separate into two categories.
The first category is created by drawing edges between all the new vertices spawned off a single old vertex. In our case, the icosahedron has vertices at 5-edge intersections, so these are five sided shapes (pentagons).
The second category is the truncated versions of the original faces. For each original face, consider each original vertex. That original vertex spawned some new vertices, but only two of them lie on the original face we are currently evaluating. The new face that replaces the old face is defined by all of the new vertices, spawned from that face's original vertices, that lie on the face.
We had triangular faces with the icosahedron, so with two new vertices per original vertex multiplied by three original vertices, we have created six-vertex faces (hexagons).
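In code, the bookkeeping looks roughly like this (a sketch, not my actual script; it assumes the polyhedron comes in as vertex coordinates plus faces given as tuples of vertex indices, and it glosses over the winding order of the small vertex faces):

import numpy as np

def truncate(vertices, faces, t=1/3):
    """Sketch of the truncate step: for each directed edge (i, j), create a
    new vertex t of the way from vertex i toward vertex j."""
    V = np.asarray(vertices, dtype=float)
    new_vertex = {}                       # (i, j) -> index of the new vertex
    new_points = []
    for face in faces:
        n = len(face)
        for k in range(n):
            i, j = face[k], face[(k + 1) % n]
            for a, b in ((i, j), (j, i)):
                if (a, b) not in new_vertex:
                    new_vertex[(a, b)] = len(new_points)
                    new_points.append(V[a] + t * (V[b] - V[a]))

    # First category: one small face per old vertex (pentagons on the icosahedron),
    # made of all the new vertices spawned from that old vertex (ordering glossed over).
    vertex_faces = {}
    for (a, b), idx in new_vertex.items():
        vertex_faces.setdefault(a, []).append(idx)

    # Second category: the truncated originals (hexagons), keeping only the two
    # new vertices on each original edge of that face, in order around the face.
    truncated_faces = []
    for face in faces:
        n = len(face)
        f = []
        for k in range(n):
            i, j = face[k], face[(k + 1) % n]
            f += [new_vertex[(i, j)], new_vertex[(j, i)]]
        truncated_faces.append(f)

    return np.array(new_points), list(vertex_faces.values()), truncated_faces

# quick check on a tetrahedron: 12 new vertices, 4 small faces, 4 truncated faces
tet_v = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
tet_f = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
pts, small, trunc = truncate(tet_v, tet_f)
print(len(pts), len(small), len(trunc))    # 12 4 4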



The truncated icosahedron is a soccer ball, shown above in red pentagons and blue hexagons.

Now we must run a join operation on the truncated icosahedron. Again, expanding from only the short wikipedia entry:

Join creates quadrilateral faces. 

That's not a lot to go on... To better understand it I considered a similar operation, kis:


Kis raises a pyramid on each face

Okay, so join is like kis without the original edges. The old vertices can stay put, but the new vertices, created in the middle of each face, need to be lifted. But by how much to lift them? And where to put them to center them on the face?

I had already gone over the math for finding the vertex on a face when I went over the math to define the dual operator. Luckily for me the pentagons and hexagons are regular, so I could just take the mean of the points in this case.

The insight for how to lift the vertices came from the symmetry. Some of the newly created quadrilaterals will go from one hexagon center point to another hexagon center point. To preserve symmetries, I assumed that all hexagon center points will be lifted by the same amount. I drew a quick diagram to figure out how far to raise them in order to make all four points coplanar (otherwise we don't have a valid face; at the time of writing, this site, which I enjoyed using in the process of learning about Conway operators, seems to have some nonplanar 'faces', so it looks very strange).

New quadrilaterals connecting old hexagon centers to old pentagon centers must also be created, utilizing two old vertices that come from the edge shared between the adjacent hexagon and pentagon. Since we have finalized the height of the point raised off the hexagon center, and we know the two points from the shared edge are the same as they were before, the only thing to do is compute how far to raise the pentagon center along the normal of the pentagon face until it lies on the plane defined by these other three points.
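The actual computation is a point-on-plane solve: raise the center along its face normal until it lands on the plane through the three points that are already fixed. A minimal sketch (the function name and the toy numbers are made up for illustration):

import numpy as np

def lift_to_plane(center, normal, p1, p2, p3):
    """How far to raise `center` along `normal` so it lands on the plane
    through p1, p2, p3 (the coplanarity step described above)."""
    center, normal = np.asarray(center, float), np.asarray(normal, float)
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    m = np.cross(p2 - p1, p3 - p1)          # normal of the target plane
    t = np.dot(m, p1 - center) / np.dot(m, normal)
    return t, center + t * normal

# toy example: raise the origin along +z until it hits the plane z = 2
t, lifted = lift_to_plane((0, 0, 0), (0, 0, 1), (1, 0, 2), (0, 1, 2), (-1, -1, 2))
print(t, lifted)    # 2.0 [0. 0. 2.]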

To draw the faces I needed to keep track of which faces were adjacent to which other faces, so I created some dictionaries and populated them with simple for loops because I'm not concerned about optimizing my code at all given what I'm using it for.



The next step was to compute and print out the angles I needed to create a tile. This was just a question of identifying the right vertices, edges, and vectors and computing various norms and cross products.
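As an example of the kind of computation involved, here is a sketch of a dihedral angle from four points, two on the shared edge and one on each face (not my actual code, just the same norm and cross product bookkeeping):

import numpy as np

def dihedral_deg(a, b, c, d):
    """Interior dihedral angle along edge a-b, between the face containing c
    and the face containing d."""
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    e = b - a                       # shared edge direction
    n1 = np.cross(e, c - a)         # (scaled) normal of the first face
    n2 = np.cross(e, d - a)         # (scaled) normal of the second face
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# sanity check on a cube corner: two faces meeting at a right angle
print(dihedral_deg((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)))   # 90.0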

For the tiles I moved back to OpenSCAD. I reused my rhombic triacontahedron tile code for the parallelogram faces (red). For the kite faces (blue) I created some new, but very similar code.

Here's the info printed out; there's definitely going to be some rounding error, but it's not significant for my purposes.

There are two dihedral angles:
Red to blue faces: 159.4455573432933 degrees
Blue to blue faces: 160.81186354627906 degrees

The red parallelogram faces have an acute interior angle: 56.67993474152847 degrees
And the supplement (computed just to check if correct, but not needed as additional info): 123.32006525847153 degrees

The blue kites are more interesting. The skinniest angles are the tips of the five pointed stars they create.
56.67993474152846 degrees

The opposite angle, which lies in the middle of those stars where they meet in groups of five, is close but certainly not the same angle:
69.7316519531762 degrees

It is a kite, so symmetric in this remaining dimension, and both of these larger angles are the same:
116.7942066526476 degrees

This didn't seem to match the info on Wikipedia and I'm also not seeing such a clear distinction between 'slim' and 'broad' rhombi. I wonder if I did something wrong, though so far everything is internally consistent for my construction. I wonder if there's an assumption I made that got rid of one degree of freedom....

Finally I needed some info about the edges. The red edges are all the same length, and they match the length of the slightly longer of the blue tiles' edges. The other type of edge is the one the blue tiles share with each other. I call this a 'short edge.'

I computed a long side: 0.4339800705873414
And a short side: 0.36037602329028573

And the ratio of short to long:
0.36037602329028573 / 0.4339800705873414 ≈ 0.830

I ordered a few kinds of magnets in sufficient quantity (I need 90 faces * 4 magnets a face, but I might also double up and do 2 magnets an edge or 8 magnets a face). While I wait, I am 3D printing some tiles just to tape together and see if the angles work. So far it is looking promising.

parallelogram faces model, shown with magnet holes


 kite faces model, shown without magnet holes


They look fairly similar to the untrained eye...

Taped together with masking tape as a test....red parallelograms and white kites. They were all printed in white, and then I used a sharpie to color the parallelograms in red.



A five pointed star of kites.



Since these come together nicely, I think I have all the vertex angles and the dihedral angles correct.

I've always used either masking tape (testing) or magnets (super-glued into the slots) to construct these, but somebody who printed one of my designs and tried gluing them together noticed an accumulating offset problem that grew as the shape was constructed and kept it from closing. It seems that you need to leave some flexibility in order to distribute errors (that come from the manufacturing process or mis-alignment during assembly). Some ideas I haven't tried include putty and velcro.

Once I've had a chance to build the whole thing, I'll follow up with another post, and clean up the code and commit it to GitHub as well.

Monday, December 3, 2018

Joystick Color Wheel with 3 Op Amps

I love microcontrollers but I've seen one too many 'Raspberry Pi deployed in order to blink a light' projects. Don't they know you can do that without a computer? They might not know.

I was sitting at the hardware table at HackUMass and watching everybody check out Arduinos and Raspberry Pis and ignore the transistors. So I thought I'd make a few simple circuits for demonstration. First was a simple flex sensor controlling an LED. Then I inverted the behavior--the flex sensor turned the LED off instead of on. Then I did the same sort of thing with the potentiometers from a joystick. It made sense to upgrade to an RGB (red green blue) LED. But there are two potentiometers (and one switch) on the joystick, and three colors in the LED. What kind of behavior would be most satisfying?

It was suggested that I implement a classic colorwheel. Three axes, set apart 120 degrees from each other, for red, blue, and green.



Okay, perfect--I can do that with some op amps. There are two axes (potentiometers) on the joystick and each is configured as a voltage divider. We need to make a weighted sum of the X direction and the Y direction outputs of the voltage divider to create the Blue and Green directions. The red is aligned with the Y axis already.


https://en.wikipedia.org/wiki/Operational_amplifier_applications#Summing_amplifier

I went through a few plans for the design. I ultimately settled on the LM358N chip using a single-sided supply and a virtual ground. The virtual ground I set to half Vcc with a simple voltage divider, guessing that the joystick rests at half Vcc (might not be completely true).

The Blue direction sits 30 degrees below the X+ direction. X*cos(30) + Y*sin(30) implemented in a summing amplifier--that's the first op-amp. For the Green, I used the same calculation, but flipped the X axis using an inverting amplifier, so that takes two more op amps for the Green axis. There are two op-amps per LM358N, so that's two ICs.
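For what it's worth, the geometry on its own (ignoring the sign flips that the inverting summing amplifiers introduce in the real circuit) is just a projection of the joystick deflection onto three axes 120 degrees apart. A quick Python sketch, with axis angles chosen to match the description above and names made up for illustration:

# Color wheel geometry only; the op-amp inversions are not modeled here.
import math

AXES_DEG = {"red": 90, "blue": -30, "green": 210}   # red on +Y, the others 120 deg away

def channel_weights(x, y):
    """Project the joystick deflection onto each color axis; negative
    projections just mean that color stays off."""
    out = {}
    for color, deg in AXES_DEG.items():
        rad = math.radians(deg)
        out[color] = max(0.0, x * math.cos(rad) + y * math.sin(rad))
    return out

print(channel_weights(0.0, 1.0))    # stick pushed +Y: red on, blue and green off
print(channel_weights(1.0, 0.0))    # stick pushed +X: mostly blue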

Each axis controls an NPN BJT transistor (I chose the 2N4401 for this). The red axis is the output of the joystick Y axis voltage divider controlling the transistor. It looks like it would go in the wrong direction (-Y) but because the summing amplifiers also invert the value relative to the virtual ground, everything ends up working out.

Finally there are some potentiometers inline with the base resistor to allow calibration of the three color channels. I found I got best results when the lights are all on and balanced for a medium white in the middle default joystick state.



This description wasn't very detailed, but it isn't a tutorial. I'm hoping to create a series of high quality instructional resources in the future, and I'd create a module on this circuit as part of that. It takes a lot of time to create high quality content though and I just don't have time at this moment. So for now, if you want to make one, I'll leave the details of implementation as an exercise to you, the reader.

Finally here's the video. I put a piece of plastic on the RGB led to get the diffuse light, because it had a clear package. That made it much nicer to look at, but the video still suffers from poor dynamic range.



Wednesday, November 28, 2018

Ubuntu won't boot, waiting on dev-mapper-cryptswap

I updated Ubuntu 18.10 cosmic and of course everything was crappy. The trackpad started registering spurious touches of my palm in the upper left corner and made typing very frustrating. And then the machine started hanging with a weird window manager (GNOME?) glitch after I had it in sleep mode all night with the lid shut, so I had to hard reboot in the morning.

This happened twice and then it wouldn't reboot anymore. It would hang on the purple screen with the word 'ubuntu' and some loading bar dots. I force rebooted (hold shift for GRUB), selected the option 'Advanced', and chose a recovery mode option of the latest version. Then I could see that we were endlessly waiting on dev-mapper-cryptswap1.device, and that was why it would not boot, not even in recovery.

I found several sites suggesting I edit the /etc/fstab file and comment out any lines talking about cryptswap. (to do note for myself for later: figure out if I should make an encrypted swap, or just live without the swap)
https://ubuntuhak.blogspot.com/2017/05/a-job-is-running-for-dev-mapper.html

OK but if I can’t boot, I can’t access any shell! How do I edit the fstab file??? I found this answer:
https://superuser.com/questions/1013658/how-to-skip-startup-jobs-for-fstab-no-timeout-centos7

But I was missing the context for it. Where do I enter that emergency boot parameter?

I was getting sick of typing Google search queries into my phone, so I went back to GRUB and looked around. In the advanced options section, the text suggests you can press 'e' to edit the entry. Then I saw something like this:

setparams 'Ubuntu, with Linux 4.18.0-11-generic'
    recordfail
    load_video
    gfxmode $linux_gfx_mode
    insmod gzio
    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
    insmod part_gpt
    insmod ext2
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy --fs-uuid --set=root [uuid]
    else
      search --no-floppy --fs-uuid --set=root [uuid]
    fi
    echo 'Loading Linux 4.18.0-11-generic ...'
    linux /boot/vmlinuz-4.18.0-11-generic root=UUID=[the uuid] ro acpi_rev_override quiet splash $vt_handoff
    echo 'Loading initial ramdisk ...'
    initrd /boot/initrd.img-4.18.0-11-generic

So that seemed promising. I added a -b to the arguments on the linux /boot/vmlinuz line, like so:

linux /boot/vmlinuz-4.18.0-11-generic root=UUID=[the uuid] ro acpi_rev_override quiet splash $vt_handoff -b

Then hit control + x as the instructions suggested to boot. This doesn’t permanently change the options, but rather boots with this modified entry just this once. So I entered emergency mode, hit control + d as the instructions suggested, and I was in.

Back to the instructions from the answer from before: https://superuser.com/questions/1013658/how-to-skip-startup-jobs-for-fstab-no-timeout-centos7
In case this leaves the root file system read-only, you can run mount -o remount,rw / once in the shell.
I didn't give this a try without doing that; I just assumed it was necessary, and that had I skipped it I would have found myself looking at my file system in read-only mode.

So back to https://ubuntuhak.blogspot.com/2017/05/a-job-is-running-for-dev-mapper.html
The solution is to remove or comment out the "cryptswap" entries from /etc/fstab and /etc/crypttab. This can be done easily by editing the above mentioned files and commenting out the lines that say cryptswap by placing a "#" in front of the matching lines.
I did that, saved the file in nano (I forgot how to use nano, but the bottom of the screen suggested some commands, so I did the one for exit and then it asked me to type Y to save before exit). Then I restarted the computer, I believe with the command shutdown, and then pressing the power button afterwards to reboot.

Then I figured before I got back to work, I could make a short blog post out of it. Here you go.

-Shira

Tuesday, June 5, 2018

CRT and Magnets Exhibit



This Cathode Ray Tube + Magnets exhibit started off with a couple of CRT monitors that were gathering dust in storage. I was asked to consider putting them out in the main space of the makerspace. I decided I would only allow this if they did something. I set out to decide on what that something would be. It turned into a fun, easy, accessible to all ages exhibit that we now turn on for all the tours we lead through this makerspace. It is a great way to quickly and cheaply construct a meaningful interactive science exhibit to add to your collection.




Here's a document I put together to explain what's happening. I taped this to one of the TV antennas (the antenna is not used) so it stays front and center on the exhibit and people are encouraged to actually read it. Here's a lower quality image of the same document so you can see it embedded in the post.


Here's the bill of materials:

  • CRT television. You may have to turn to eBay or Craigslist. These may only get harder to acquire with time.
  • If the CRT has VHF/UHF inputs, you'll need a box like this one to convert the signal to composite video: https://www.amazon.com/dp/B0014KKV7W/
  • For the camera, the backup camera is cheap and outputs a composite video signal over RCA connectors. https://www.ebay.com/itm/CMOS-Car-Rear-View-Reverse-Backup-Camera-Parking-Night-Vision-Waterproof-7-LED/291918612347
  • If you'd like to place the camera elsewhere, you can get a cheap 2.4GHz transmitter like this one https://www.ebay.com/itm/2-4G-Wireless-Video-Transmitter-Receiver-Kit-for-Car-Rear-Backup-View-Camera/163041336550
  • A powerful neodymium magnet. You want something strong enough to have an effect and a good size to be easy to handle. Something like 0.5 inch diameter and 0.5 inch height seems like a good size to me, but I'd suggest just seeing what's available and trying it. You can always stack up multiple smaller magnets.
Assembly is just a question of plugging everything into power and getting the signal into the TV. Note that the backup camera requires a little work to plug in; it is designed to be hooked up to a 12V car battery connection point. You will need a 12VDC wall adapter if it does not come with one. You will need to make sure you have the red/positive connector and black/negative connector going to the right places. You will probably need to solder at least one connection, or use another strategy to connect the wires.

The magnet should be protected with something soft to avoid it hitting against metal and breaking or pinching fingers. I used two furniture feet and some masking tape. I also suggest putting it on a string so it doesn't wander off.

Cable management was the longest part of the project. I zip-tied all the extra length of cabling in back of the assembly. A single switch controls the extension cord to which the entire unit is plugged in. Note the CRTs make a high pitched noise that some people don't like to hear all day, so I don't leave it on all the time, since the main room of the makerspace is also a meeting and study space.

The cameras are taped to the top of the CRT and pointing at brightly colored pieces of paper. This is important because the magnet effect is not nearly as visible on black & white images.


An additional, optional modification I made to the backup camera was removing the infrared LEDs it contains to provide better visibility at night. The camera ran very warm, and when the IR lights activated, which sometimes happened if the exhibit experienced low light conditions, the image was washed out. I opened the unit and de-soldered the IR LEDs. My original plan was to put gaffer's tape over the IR LEDs, but that seemed to make the camera heat up even more. There is a sensor inside the unit, and in bright lighting the infrared LEDs are not activated, so this is not necessarily something that needs to be addressed for the exhibit to function.


It is also important to note that the backup camera is designed to mirror images. Note the sign in the image above is printed as a mirror image in order to show up correctly on the display. The backup distance overlay is another artifact of the choice to use a backup camera; I find it is fun and adds to the color distortion effect since it is displayed in bright colors.