Wednesday, December 25, 2013

Adjusting water hardness and pH of a beer mash

I had the opportunity earlier this week to work at my brother-in-law Chris's microbrewery, and it was a blast.  The day before that we had stopped by and he showed me around.  It was amazing to me that they had started from nothing not too long ago (< 1 year), and now they had a row of fermenters, a grain mill, and several other pieces of equipment whose names I don't know, but whose functions I recognize from home-brewing experience.  What was exciting for me personally was that a lot of the technology and equipment was familiar to me from grad school!  I studied surface science: how nickel metal can catalyze chemical reactions.  We used steel chambers under ultra-high vacuum, and so we had lots of pumping systems, cooling systems, pneumatic systems, and the chambers were connected to gases via steel lines.  All of this is present at the brewery, as well as other equipment.  Anyway, it was great to "see some old friends", as we used to say when we'd see a familiar piece of equipment in a different setting.

The main thing we worked on was investigating the pH (acidity) of the mash.  A general recommendation is that the pH of the mash should be in the range 5.1 to 5.4, but Chris had consistently been measuring it to be 5.7.  Not a huge problem, but in the quest for better beer it seemed like a good idea to figure out what was going on.  We started discussing it on the way there.  The city water was reported as being pH 8, and Chris had a pH meter and had made measurements by sampling the water at various locations and under various conditions.  We decided to do something similar, write it all down, and then try a mini-mash and see if by adding some salts and/or phosphoric acid we could get the pH into the 5.1 - 5.4 range.

Edit:  pH is important because it affects the enzyme activity during the mash, and the yeast activity during the fermentation:
http://byo.com/fruit-beer/item/1493-the-power-of-ph
http://www.howtobrew.com/section3/chapter15-2.html

Sunday, April 14, 2013

Horseshoes, hand grenades but not black hole event horizons: Being close to the horizon is not good enough

I do not currently understand the math of general relativity, but based on some stated rules about the behavior of light around black holes I've created a model in my mind that I would like to write down and then explore and/or destroy.

Start with an event horizon:
http://en.wikipedia.org/wiki/Event_horizon
In general relativity, an event horizon is a boundary in spacetime beyond which events cannot affect an outside observer. In layman's terms it is defined as "the point of no return" i.e. the point at which the gravitational pull becomes so great as to make escape impossible. 
Outside the event horizon of a black hole is another interesting boundary - the photon sphere:
http://en.wikipedia.org/wiki/Photon_sphere
A photon sphere is a spherical region of space where gravity is strong enough that photons are forced to travel in orbits. The radius of the photon sphere, which is also the lower bound for any stable orbit, is:
r = \frac{3GM}{c^{2}}
which is one and a half times the Schwarzschild radius.
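Spelling that comparison out (the Schwarzschild radius formula is a standard result, not taken from the quoted article): the event horizon of a non-rotating black hole sits at

r_{s} = \frac{2GM}{c^{2}}

so the photon sphere radius can be written as

r = \frac{3GM}{c^{2}} = \frac{3}{2} r_{s}

i.e. light can orbit (unstably) at one and a half times the radius of the event horizon.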

Saturday, February 16, 2013

Some references and thoughts about the meteor explosion

http://en.wikipedia.org/wiki/Iron
http://en.wikipedia.org/wiki/Chondrite

300 kilotonne explosion
> 100 kilotonne explosion

18 km / s (at entry?)

15 m size (diameter?  radius?)

  • volume if 15 m is the radius:  (4/3)·π·(15 m)^3 ≈ 14137 m^3 ~ 14e3 m^3 = 14e9 cm^3
  • volume if 15 m is the diameter (radius 7.5 m):  ≈ 1.75e9 cm^3


7000 metric tonnes

  • 7e9 g
  • density if 15 m is the radius:  7e9 g / 14e9 cm^3 = 0.5 g/cm^3
  • density if 15 m is the diameter:  7e9 g / 1.75e9 cm^3 = 4 g/cm^3 (both estimates are worked through in the sketch below)
  • density of iron:  7.874 g/cm^3
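
A minimal C++ sketch of the two density estimates, using only the figures above (a 7000 metric tonne mass and a 15 m size read either as the radius or as the diameter of a sphere):

#include <cstdio>

// Sphere volume in cm^3 for a radius given in meters
double sphereVolumeCm3(double radius_m) {
    const double pi = 3.14159265358979;
    double r_cm = radius_m * 100.0;
    return (4.0 / 3.0) * pi * r_cm * r_cm * r_cm;
}

int main() {
    const double mass_g = 7.0e9;   // 7000 metric tonnes = 7e9 g

    double volIfRadius   = sphereVolumeCm3(15.0);  // 15 m read as the radius
    double volIfDiameter = sphereVolumeCm3(7.5);   // 15 m read as the diameter

    std::printf("density if 15 m is the radius:   %.2f g/cm^3\n", mass_g / volIfRadius);
    std::printf("density if 15 m is the diameter: %.2f g/cm^3\n", mass_g / volIfDiameter);
    std::printf("density of iron (for comparison): 7.874 g/cm^3\n");
    return 0;
}

The diameter reading (~4 g/cm^3) is much closer to stony material than the radius reading (~0.5 g/cm^3), and both are well below iron.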


chondrite rather than iron - iron meteoroids tend to survive and reach the earth's surface

  • a hodgepodge, but basically / mostly oxides
  • not as dense as iron
  • not as heat conductive?
exploded at 15-20 km altitude
energy release began higher, at ~50 km altitude



translation (via Google) of the statement from the Russian Academy of Sciences:
This morning the fall of a cosmic body was registered over the city of Chelyabinsk, causing a bright flash of light and a strong shock wave. Shattered windows in homes were reported. We estimate the size of the body was a few meters, its mass on the order of ten tons, and its energy a few kilotons. The body entered the atmosphere at a speed of 15-20 km/s and broke apart at an altitude of 30-50 km; the movement of the fragments at high speed caused a powerful glow and a strong shock wave. The main part of the material of the falling body evaporated (burned up); the fragments that remained slowed down and could fall to the ground as meteorites. Usually the total mass of meteorites found is no more than 1-5% of the initial mass. The main energy release was at an altitude of 5-15 km. Bodies of this size fall quite often, several times a year, but usually burn up at high altitudes (30-50 km). The body considered here seems to have been very strong, probably iron. The last time a similar phenomenon was observed in Russia was in 2002 (the Vitim bolide). More accurate estimates can be given after all the available information is received.
http://www.ras.ru/news/shownews.aspx?id=1da2959b-902f-46b2-9f1f-0c62d19740e8#content


Why do meteors explode:
http://www.livescience.com/27188-russian-meteor-explosion-faq.html
Asteroids are just chunks of rock, so what makes them so explosive? In a word: speed.
The kinetic energy, or energy of motion, of a speeding asteroid is enormous. The Russian meteor entered the atmosphere going 40,000 miles per hour (64,374 km per hour), Bill Cooke, lead for the Meteoroid Environments Office at NASA’s Marshall Space Flight Center in Huntsville, Ala., said in a NASA press briefing.
The chunk of asteroid or comet that caused the 1908 Tunguska event is estimated to have entered the atmosphere at about 33,500 mph (53,913 km/h).
The shock wave from an asteroid's interaction with the atmosphere heats up the rock, essentially vaporizing it, Boslough said. The hot vapor then rapidly expands in the atmosphere, with explosive results.
"It's just like TNT going off, only much more energy," Boslough said.

  • Ideas: 
    • leading edge is superheated above the boiling point
    • Or:  because the pressure on the leading edge is high, the boiling point of the material is much higher
    • Material is not heat conductive, so either way there is a thermal gradient and thermal stress.  
    • At some point the object cracks / fractures - this causes "instant" vaporization - 
      • either the superheating condition is triggered and the material goes to equilibrium - the vapor phase
      • or the fracture/crack causes the fragments to rotate so that pieces that were previously on the leading edge are no longer facing their individual direction of travel.  With the pressure reduced, the material vaporizes "instantly", causing the explosion


Sunday, February 3, 2013

Factorial Design of Experiments

This blog post is about how scientific experiments can be designed so that the system being tested does not have to be measured at every possible combination of variables - or, if it is, how second-order effects (interactions) between variables can be calculated.  I'll work through an example to illustrate the principle: in this example the "system" being tested is brewing beer.  Considering that brewing a batch of beer takes at least 2 weeks and involves many hours of work, it could be very worthwhile to find a way to get the same information from fewer experiments.

For some practical applications / examples of factorial design of experiments (and statistical design more generally), here are some papers that Joshua L. Hertz and I wrote when we were post-docs in Stephen Semancik's group at NIST:
Combinatorial Characterization of Chemiresistive Films Using Microhotplates
A Combinatorial Study of Thin-Film Process Variables Using Microhotplates


Introduction

Assume that we are interested in the effect of three variables on the color of the beer: ferment temperature, type of yeast used, and mash temperature.  We will test each of these variables at 2 different settings:
ferment temperature:  50 F, 45 F
type of yeast:  WP004, WP005
mash temperature:  145 F, 150 F

With 3 variables that have 2 settings each there are 8 possible combinations of experiments that can be run.

These three variables and their 2 settings each can be represented as a cube, where each dimension is a variable, each face has a fixed value for one of the variables, and each vertex represents one of the 8 possible combinations of the variables.
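
As a concrete illustration, here is a minimal C++ sketch (the variable names are mine; the settings are the ones listed above) that enumerates all 8 vertices of that cube, i.e. the runs of the full 2^3 factorial design:

#include <iostream>
#include <string>

int main() {
    // Two settings for each of the three variables, taken from the list above
    const std::string fermentTemp[2] = {"50 F", "45 F"};
    const std::string yeast[2]       = {"WP004", "WP005"};
    const std::string mashTemp[2]    = {"145 F", "150 F"};

    // Full 2^3 factorial design: every combination of the settings,
    // i.e. every vertex of the cube described above
    int run = 1;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                std::cout << "run " << run++
                          << ": ferment " << fermentTemp[i]
                          << ", yeast " << yeast[j]
                          << ", mash " << mashTemp[k] << "\n";
    return 0;
}

Each run corresponds to one brew; the point of designing the experiment is to get the same information from a subset of these 8 combinations rather than brewing all of them.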

Saturday, January 19, 2013

An unbiased random number generator: references / classic solutions and a 2nd try

In a previous post I discussed my first attempt to create "an efficient unbiased" random number generator.  Efficient may not have been the right word to use;  friends have subsequently pointed me towards the classic solution to this problem:
http://stackoverflow.com/questions/1986859/unbiased-random-number-generator-using-a-biased-one
The events (p)(1-p) and (1-p)(p) are equiprobable. Taking them as 0 and 1 respectively and discarding the other two pairs of results you get an unbiased random generator.
Basically, treat a 0 followed by a 1 as a 1; treat a 1 followed by a 0 as a 0.  If a 0-0 or a 1-1 occurs, re-roll.  This is efficient and easy to implement.  One problem however is that it has an unbounded upper limit on its run time.  Relatedly, as the fraction p gets closer to 1 or 0, the number of re-rolls required increases.  Note that the above StackOverflow answer links out to a solution which solves these problems!
http://www.eecs.harvard.edu/~michaelm/coinflipext.pdf
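
Here is a minimal sketch of that pairing trick in C++ (the bernoulli_distribution stands in for BIASED with a hypothetical p = 0.7; any 0/1 source with a fixed bias would do):

#include <iostream>
#include <random>

// Stand-in for BIASED: returns 1 with some fixed, unknown probability p.
// The p = 0.7 here is only a hypothetical value for illustration.
std::mt19937 gen(std::random_device{}());
std::bernoulli_distribution biasedDist(0.7);

int biased() { return biasedDist(gen) ? 1 : 0; }

// Von Neumann debiasing: draw pairs, keep "01" as 1 and "10" as 0,
// re-roll on "00" or "11".  Unbiased, but with no fixed bound on run time.
int unbiased() {
    while (true) {
        int first = biased();
        int second = biased();
        if (first != second) return second;  // 0,1 -> 1 and 1,0 -> 0
        // 0,0 or 1,1: discard the pair and try again
    }
}

int main() {
    const int trials = 100000;
    int ones = 0;
    for (int i = 0; i < trials; ++i) ones += unbiased();
    std::cout << "fraction of 1's: " << double(ones) / trials << std::endl;
    return 0;
}

The expected number of pairs per output bit is 1 / (2p(1-p)), which is why the number of re-rolls blows up as p approaches 0 or 1.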

I modified the algorithm / code from my previous post to attempt to remove the bias that was still present.  The previous algorithm and the new one both share the benefit of having a well-defined upper bound on the run time.  The new algorithm calls BIASED N times and compares the fraction of 1's returned to the previously obtained distribution of 1's.  If the ratio of the current fraction to the previous fraction is greater than 1.0, then it returns a 1, otherwise a zero.  After each run, it updates the number of calls N to make sure that N is high enough that the resolution of the measured fraction (increments of 1/N) is fine enough relative to the running fraction of 1's and 0's measured so far.

The result is an algorithm that is still biased, and that is much less efficient than the von Neumann algorithm!  The code is available at github anyway:
https://github.com/dllahr/sandbox/tree/master/intro_to_alg/5.1-3/second_attempt

Update:  A thought occurred to me that my algorithm is basically treating this like a control problem.  I'm using a feedback loop to take the observed bias of BIASED (the error) and compensate accordingly as needed.  The reason it does not work perfectly is that basic control theory says the response should be proportional to the error, the derivative of the error, and the integral of the error:
R = a*E + b*(dE/dt) + c*(integral(E dt))

In both of the algorithms I've written I'm just using the integral of the error.
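
For what it's worth, here is a minimal sketch of that response formula as code (a PID-style update; the sampling interval dt and the gains a, b, c are assumptions for illustration, not values from my algorithm):

// PID-style response to an error signal, mirroring R = a*E + b*(dE/dt) + c*(integral(E dt)).
// 'integral' and 'previousError' carry state between calls.
double response(double error, double dt,
                double &integral, double &previousError,
                double a, double b, double c) {
    integral += error * dt;                              // integral of the error
    double derivative = (error - previousError) / dt;    // derivative of the error
    previousError = error;
    return a * error + b * derivative + c * integral;
}

In that framing, my two algorithms effectively set a = b = 0 and use only the c * integral term.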

Why does the above algorithm require many more calls?  I think the short answer is provided by the StackOverflow entry linked to above (1st link), as it describes an enhanced von Neumann solution to the problem (2nd link):
Further on, the paper develops this into generating multiple unbiased bits from the biased source, essentially using two different ways of generating bits from the bit-pairs, and giving a sketch that this is optimal in the sense that it produces exactly the number of bits that the original sequence had entropy in it.
I am not sure exactly how to apply the above to the algorithm I wrote; at best I would say that converting UNBIASED results to numbers / doubles wastes information.


Wednesday, January 16, 2013

Attempting to create an Unbiased random number generator (with defined upper bound runtime) from a Biased random number generator

This is based on a problem in Chapter 5 of Introduction to Algorithms.  Basically, the problem asks:
Given a biased random number generator (BIASED) that generates 1's and 0's, such that it generates 1's a fraction p of the time and 0's a fraction (1-p) of the time, write an algorithm that generates 1's and 0's with an even distribution.  You do not know p (p is not an input to the algorithm).
Summary:  I discuss how I attempted to solve the problem, statistical & performance results of an implementation of the solution, and a problem with the implementation.

Update:  from comments and friends' help, a new post summarizing classic / efficient solutions and a second attempt:  http://dllahr.blogspot.com/2013/01/2nd-try-unbiased-random-number.html

I assumed that I couldn't just measure p during the start up / initialization of the algorithm.  Here's how I attempted to solve this problem:

  1. create an array with 2 entries
    • The first entry is the cumulative number of 0's returned by BIASED
    • The second entry is the cumulative number of 1's returned by BIASED
  2. when generating an unbiased number:
    1. determine which array entry is smaller - the smaller indicates the value that we must favor in order to balance the distribution - call this the "target value"
      • if the first entry is larger, then BIASED currently appears to be biased towards 0's
      • if the second entry is larger, then BIASED currently appears to be biased towards 1's
    2. take the ratio of the array entries such that the ratio > 1
    3. call BIASED repeatedly until either the target value is returned or the number of calls equals the ratio
      • during each call store the results in the array above
      • NOTE:  do not update the ratio
    4. if the target value was returned from BIASED, return this result
    5. if instead BIASED was called ratio number of times without returning the target value, return the opposite of the target value
      • i.e. if target value was 1, the opposite value is 0
I implemented the above in C++ (Cygwin g++ to compile), available at github here
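
For reference, here is a minimal sketch of the steps above (not the actual implementation linked on github; the bernoulli_distribution stands in for BIASED with a hypothetical p = 0.7, and seeding the counts at 1 to avoid dividing by zero is my own detail):

#include <iostream>
#include <random>

// Stand-in for BIASED: returns 1 with some fixed, unknown probability p.
// p = 0.7 here is only a hypothetical value for illustration.
std::mt19937 gen(std::random_device{}());
std::bernoulli_distribution biasedDist(0.7);

// Step 1's array: cumulative number of 0's and 1's returned by BIASED.
// Seeded at 1 (a detail not specified above) so the first ratio is well defined.
long long counts[2] = {1, 1};

int biasedAndRecord() {
    int value = biasedDist(gen) ? 1 : 0;
    ++counts[value];
    return value;
}

int unbiased() {
    // step 2.1: the smaller entry is the "target value" we must favor
    int target = (counts[0] <= counts[1]) ? 0 : 1;

    // step 2.2: ratio of the entries such that ratio > 1 (not updated below)
    double ratio = double(counts[1 - target]) / double(counts[target]);

    // step 2.3: call BIASED until the target appears or the call count reaches the ratio
    for (int calls = 0; calls < ratio; ++calls) {
        if (biasedAndRecord() == target) {
            return target;            // step 2.4
        }
    }
    return 1 - target;                // step 2.5: the opposite of the target value
}

int main() {
    const int trials = 100000;
    int ones = 0;
    for (int i = 0; i < trials; ++i) ones += unbiased();
    std::cout << "fraction of 1's: " << double(ones) / trials << std::endl;
    return 0;
}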