Is Recycling Worthwhile?

Hi Bestest of Buddies!

I hope you’ve been able to enjoy some walks by the lake in this splendid fall weather!

This week, I read an article claiming that recycling isn’t really cost-effective, and that the companies that make new plastic have misinformed the public about this in order to make consumers feel better about using plastic. It brought to mind conversations I’ve had in the past with people who had heard that the plastic you put in the recycling bin just gets thrown in the landfill in the end — so why bother?

The article brings up some great points that I want to discuss with you. But it also oversimplifies the situation. I don’t have a ton of time tonight, but I want to take a minute to describe the system to the best of my current understanding.

A lot of recycling gets collected in cities via a “single-stream” system, where consumers place all their paper, plastic, glass, and metal recyclables into the same bin on the curb. This makes recycling easier on consumers — they don’t have to take these items to a collection facility themselves — so a lot more recyclables are probably collected than otherwise would be. But in order to actually recycle this stuff, all the different types of materials have to be sorted (and any trash people mistakenly added has to be removed).

This brings us to the Materials Recovery Facility (MRF). These facilities use humans and/or fancy (and super expensive) machines to sort recyclables by type. Each type gets packaged (often in bales) and sold to recycling plants specializing in that specific material.

That’s where my company comes in! We buy bales of nasty (but sorted) polypropylene from MRFs, grind it, clean it, and melt it down to make pellets that can be injection molded into new items for people to use. My company doesn’t make use of government subsidies, and most of our customers don’t even pay a premium on account of the “post-consumer recycled (PCR)” content in the plastic they buy from us. And we make a profit doing this.

So, from my perspective, it’s tempting to discount the article I read. After all, I’m recycling plastic and making money at it. So clearly it is worthwhile to recycle plastic. But I’ve got to be careful not to oversimplify things! Let’s take a look at some additional information:

Fact 1: Sorting is HARD!

Many MRFs don’t separate the different plastic types very well. And that’s a problem for us, since 5% PET (#1 plastic) mixed with our polypropylene (#5) is enough to bring our machines to a screeching halt. Our grinding and cleaning process does a great job of removing all the “random crap” from our polypropylene, but then we have to landfill all of that random crap. It isn’t rare for us to see a 25% loss of total weight over the course of our cleaning process, and we’ve seen much worse at times. So, in order to procure bales of post-consumer plastic clean enough for efficient processing, we often find ourselves shipping in material from many states away. This is sad, because I’m sure there are millions of pounds of polypropylene flowing into landfills across Missouri and Kansas. It’s also worth noting that I have no idea how profitable it is for MRFs to sort plastic.
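To see how quickly contamination eats into the economics, here’s a little back-of-the-envelope sketch. Every number in it is made up for illustration; these are not my company’s actual figures:

```python
# Back-of-the-envelope: what a bale really costs per usable pound of polypropylene.
# All numbers are hypothetical, for illustration only.
def cost_per_usable_lb(bale_price_per_lb, loss_fraction, landfill_fee_per_lb):
    """Effective cost of one usable pound of PP after cleaning losses.

    loss_fraction is the weight fraction discarded during grinding and cleaning;
    the discarded material also incurs a landfill tipping fee.
    """
    usable_fraction = 1.0 - loss_fraction
    total_cost = bale_price_per_lb + loss_fraction * landfill_fee_per_lb
    return total_cost / usable_fraction

print(cost_per_usable_lb(0.20, 0.25, 0.02))  # 25% loss -> ~$0.27 per usable lb
print(cost_per_usable_lb(0.20, 0.50, 0.02))  # a much dirtier bale -> ~$0.42 per usable lb
```

A dirtier bale doesn’t just cost a little more; the loss hits you twice — once as material you paid for but can’t sell, and once as a disposal fee — which is why we’ll pay freight to bring in cleaner material from many states away.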

Fact 2: Product Design is Important

It makes me sad every time I see a plastic water bottle. Your everyday water bottle is made of PET (#1). But the label is either made of paper or a thin film of polypropylene (#5) printed with acrylic inks. And the cap is usually made of high-density polyethylene (#2). This unnecessary mixing of incompatible material types in so, so many consumer products adds to the difficulty, expense, and waste of recycling, and it makes me sad. Technically, consumers share the guilt for this, since we like to buy items in pretty containers with shiny labels. But I feel that corporations bear the chief guilt and need to rethink their packaging design paradigm.

Fact 3: Recycling Isn’t a Silver Bullet

Recycling uses a TON of electrical power. I can probably look up the numbers for you sometime if you like. But suffice it to say that using a plastic bottle and then recycling it afterwards is far worse for the environment than drinking from a reusable container. It’s better than sending plastic to the landfill. But we mustn’t forget that reducing our plastic consumption and reusing plastic items are better options than recycling.

Ok. It is DEFINITELY bedtime now. So I’m gonna stop here with a quick recap:

Please, please, please recycle! I promise that it is worth it. And as oil becomes more scarce, recycling is only going to become more important. Yes, there are certainly plenty of cases where plastics collected for recycling end up in landfills. But not all of it ends up there! And I really believe that if we take this issue seriously, we can dramatically improve our recycling infrastructure. Yes, we should be skeptical of the information that big plastics companies feed us. Yes, we should do our best to reduce our use of plastics. But if we give up on recycling because we read in an article that it isn’t worthwhile, then nothing is going to get better.

Have a great night!

-Alexander

Plot Twist: Goodbye Lasers. Hello Plastics.

Dear Mr. Morrow,

As you know, I left my lasers in Illinois and kissed graduate school goodbye about 18 months ago. I spent a little while trimming the lawns at our Alma Mater, but for most of the intervening time I have been working as some sort of chemist/engineer at a facility that recycles polypropylene (#5 plastic).

I work in a plant where ground up yogurt containers, McDonald’s cups, peanut butter labels, rope, and other diverse forms of polypropylene waste are melted down and mixed together to make small black pellets that we ship out in tractor-trailers. Our customers melt these down at their factories and squeeze the melted plastic into molds to create funnels, car ramps, concrete form ties, and other items.

Before I started working in the recycling sector a year and a half ago, I didn’t have a clear idea of how the plastic, glass, metals, and paper I put by the curb each week were transformed back into useful materials. There are still many parts of the process that I don’t understand, but I’m learning more every day. I’d like you to join me this year as I learn more about how recycling works and how we, as consumers, can be good stewards of the plastic in our lives.

I don’t expect my growth in knowledge to be chronological with respect to the plastic lifecycle, so be prepared for a bumpy ride with lots of backtracking. But I think there’s a good chance that it will be fun, all the same.

I hope you had a great new year!

I’m off to play a board game, so I’ve gotta go. Have a marvelous evening.

Alexander

Wikipedia and the Academy

Alexander,

Today I’m going to ramble a bit about something that has bothered me for over a year: I am continuously irked by how little emphasis is placed on academics curating knowledge for public use. Let’s begin.

I love Wikipedia. Of course, you probably already knew this fact. After all, I have shared Wikipedia pages to your Facebook wall 11+ times. I love the egalitarian nature of Wikipedia. I love that Wikipedia desires to be a free, reliable source that allows people access to the sum of human knowledge.1 Skeptic that I am, the Wikimedia Foundation is the only foundation that I donate to on a regular basis. Wikipedia is known to have many problems—a few of which are identified on its own wiki page,2 but in general I think of the presence of Wikipedia as a massive net positive in my life and in many others’.

Much of my education has entailed reading Wikipedia pages. Back in the day, when I was taking Dr. Trifan’s Modern European History class, I found myself reading hundreds of pages on historical figures. I continuously return to pages like the List of trigonometric identities or the page on the harmonic oscillator. When it comes to working on electrical circuits, Wikipedia saves the day so often (Darlington pair, anyone?)! Wikipedia pages have offered me an introduction to pretty much every topic. I love how much clarity on a common concept I can gain merely by reading a wiki page—for instance, the concept of “Silicon Valley”.

As a scientist, I spend a lot of my time reading scientific literature. I also spend a fair bit of time explaining concepts to students, lab mates, my audience at a presentation, etc. In the current academic climate, one major measure of my worth as a scientist is my publication history. I am rewarded more when I publish in high Impact Factor journals. Many times, the articles people write in these journals will not be accessible to the general public (the people who fund the research) due to journal paywalls.3 Moreover, these articles generally skim over background information and formalism, assuming the reader is already trained in that area. Much pain is endured by graduate students in order to move from their generalized undergraduate training to the very specific nature of modern research. Graduate students have years of literature to catch up on and understand. It is important to note: I am rewarded when I understand the old knowledge merely because that understanding allows me to create new knowledge.

Broadly, all of the spectroscopy I do, and that I hope to publish on, falls under the scope of “four-wave mixing” (FWM). Thousands of papers have been published developing FWM techniques and using FWM to investigate everything from semiconductors to the proteins that facilitate photosynthesis. But the FWM article on Wikipedia is quite decrepit. Sure, it speaks truth, but there is an entire field of spectroscopy and science under the banner of FWM that is not really touched on in the article. Moreover, there is a massive amount of fundamental physics that should be considered in this article. My introduction to FWM would have been greatly aided if I had had access to a much better version of the FWM Wikipedia page. This article ought to be heavily expanded, and it ought to be done by an academic like me. I haven’t seriously edited this article yet merely because it would be a herculean task,4 but all of my research is a herculean task. How is this different?

Academics sometimes write ‘review’ articles that attempt to provide a concise overview of the primary literature as an introduction or reminder for their readers. In a toned-down way, this is exactly what a Wikipedia article is to me. But, in general, an academic is rewarded when they publish new work in closed journals, not when they carefully lay out fundamental work for non-experts. To put it succinctly, there are reward mechanisms in place for publishing work in the traditional venues, but sadly there doesn’t seem to be a reward structure in place for creating broadly accessible content in a free encyclopedia—this isn’t something you see on the CV of a professor going up for tenure.

Scientists are rewarded for making new knowledge, not for making past knowledge understandable by the masses. But as the amount of published work balloons, it becomes harder and harder to understand the work that has been done. What good is new knowledge if the old knowledge is inaccessible and understandable by none but expert academics in some sub-sub-sub discipline?5

It ought to be the case that I am rewarded just as much for curating a slew of Wikipedia pages as I am for publishing a review article or a piece of original research. The fact that I am not speaks volumes about how the academy has prioritized the novel and sexy over actually furthering human understanding. The academy has defined worthwhile knowledge as flashy, not as solid, rigorous, accessible truth about the possibly mundane.

Soon I will write introduction chapters for my PhD thesis that have been written by others time and time again. These chapters will carefully build up the theory necessary to understand my novel work in nonlinear spectroscopy and semiconductors. Under the standard model, these chapters will be bound into a few copies of my thesis, which will then gather dust for the rest of their existence (two for me, one for my advisor’s bookshelf, one for the UW library, one for my mother, and maybe some others). No one but me will benefit from the two months of my time spent constructing those pages. None of the knowledge will be new; it will merely be me mixing together N sources into one cohesive picture.

Shouldn’t there be an imperative for me to take some of the knowledge in my thesis and put it into Wikipedia pages? Is it best for humanity that the academy has equated “worthwhile” work with “novel” work? Shouldn’t we push for curation of old knowledge just as much as we push for creation of new knowledge?6

Darien

And now the footnotes:

  1. See for instance Wikipedia’s lofty language here https://en.wikipedia.org/wiki/Wikipedia:Purpose
  2. https://en.wikipedia.org/wiki/Wikipedia#Critical_reception
  3. I have many thoughts on “open-science.” We should talk of them sometime.
  4. I have made myriad minor edits on pages ranging from biographies to mathematical theorems. In the case of Irving Langmuir’s page, I fact-checked a paragraph and ended up removing it due to sketchy citations. In many other cases, I have found problems with explanations but didn’t fix them at the time due to a lack of time on my part. However, I have never spent hours of my time carefully constructing sections in a Wikipedia page. I am largely a unilateral user.
  5. There are, of course, notable exceptions. MIT’s OpenCourseWare is amazing—it attempts to open up course materials to everyone. My favorite is Mildred Dresselhaus’s solid-state physics course.
  6. Yes, there exist things like textbooks, but these are meant for learning an entire subject, not for addressing individual topics in detail and entirety.

Multi-dimensional Spectroscopy: A brief intro

Alexander,

Some days I call myself a spectroscopist—a scientist whose primary tool is light. When I think of light, I think of it as an electromagnetic wave—I’ll call it an electric field from now on. In lab I use electric fields to tickle samples. But I don’t just use one field, I use many fields. In my lab these fields are laser beams.

I would like to give you a sweeping overview of what my lab does. To do this, I first need to give you a bit of perspective and context. I expect this post to merely whet your appetite. In the future I plan to write about more specialized topics related to my research (like your request that I explain detection strategies, homodyne vs. heterodyne, or how you can do frequency- or time-domain measurements).

In days of old, the predominant type of spectroscopy was vanilla absorption/reflection/transmission spectroscopy. Light from a source, like a tungsten lamp, is shone on a sample. The amount of light both incident on the sample and reflected off of (transmitted through) the sample is quantified. A mathematical metric is then used to define the reflectance (transmittance) spectrum. Different colors of light interact differently with the same sample, so a spectrum of intensity vs. incident color can give information about a sample. In general chemistry labs that I have taught, students use absorption spectroscopy to find the concentration of metal ions in solutions and to monitor the progression of a reaction. It is fun to note that the human eye works by observing the light transmitted and reflected off of surfaces and then spectrally resolving that light to give us information about our surroundings. In other words, the human eye/brain uses vanilla absorption/reflection/transmission spectroscopy to interrogate the world.
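For concreteness, the workhorse relation behind those concentration measurements in gen-chem lab is the Beer-Lambert law (a standard textbook relation), which ties the measured light intensities to the concentration of the absorbing species:

```latex
% Beer-Lambert law: absorbance A from incident (I_0) and transmitted (I) intensity
A \;=\; \log_{10}\!\frac{I_0}{I} \;=\; \varepsilon \,\ell\, c
```

Here ε is the molar absorptivity of the species at the chosen color, ℓ is the path length through the sample, and c is the concentration. Measure A at a wavelength where your metal ion absorbs, and the concentration falls right out.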

My lab specializes in multi-dimensional spectroscopy. This is in contrast to 1-dimensional spectroscopy like the spectroscopy outlined in the previous paragraph. There are many ways for a spectroscopy to colloquially be called multi-dimensional, but the most common definition requires the spectroscopy to entail the usage of at least two different colors of electric fields. These interrogating fields can be scanned in their color. Information is then represented as 2D plots of color_1 vs. color_2, with some measured intensity as a z-axis that is generally represented as a heat map. There is another way for a spectroscopy to be multi-dimensional. You can have multiple electric fields that are pulses in time (they can be the same color). You can then change the amount of time between each pulse. Information is then represented as 2D plots of color vs. delay_time, again with measured intensity as a z-axis rendered as a heat map. My lab uses both of these tricks to do our spectroscopies.
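In case it helps to see one drawn, here is a minimal sketch of such a heat map. The peaks, axis values, and line widths below are entirely synthetic, invented just to show the plotting convention:

```python
# A synthetic 2D spectrum: intensity on a grid of color_1 vs. color_2,
# rendered as a heat map. All peaks and numbers are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

w1 = np.linspace(1.5, 2.5, 200)  # "color_1" axis, e.g. pump photon energy (eV)
w2 = np.linspace(1.5, 2.5, 200)  # "color_2" axis, e.g. probe photon energy (eV)
W1, W2 = np.meshgrid(w1, w2)

def lorentzian(x, x0, gamma):
    """Unit-height Lorentzian line shape centered at x0 with half-width gamma."""
    return gamma**2 / ((x - x0)**2 + gamma**2)

# Two hypothetical resonances plus a weaker cross peak coupling them.
signal = (lorentzian(W1, 1.8, 0.05) * lorentzian(W2, 1.8, 0.05)
          + lorentzian(W1, 2.2, 0.05) * lorentzian(W2, 2.2, 0.05)
          + 0.5 * lorentzian(W1, 1.8, 0.05) * lorentzian(W2, 2.2, 0.05))

plt.pcolormesh(w1, w2, signal, shading="auto")
plt.xlabel("color_1 (eV)")
plt.ylabel("color_2 (eV)")
plt.colorbar(label="intensity")
plt.title("Synthetic 2D spectrum (illustration only)")
plt.show()
```

The two peaks on the diagonal are what a 1D measurement would show; the off-diagonal cross peak is the kind of feature only the second dimension can reveal.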

Alexander, I presume you are currently furious with me! I have introduced complexity without telling you why. Why should I go through the effort of having multiple colors of laser pulses and then changing their time delays? The traditional answer: because the world is so complex and interesting that vanilla 1D spectroscopy can’t begin to tell us everything about a sample. A more nuanced answer divides the rationale into multiple categories.

Some people do multi-dimensional spectroscopy because the extra dimensionality allows them to understand the ground state [footnote: by the ground state, I mean the unexcited, unperturbed system] of complex systems whose 1D spectra are impossible to interpret due to “spectral congestion”. Spectral congestion is just a fancy term for a system having lots of things responding at a particular color, so that you don’t know which individual things are giving you the response. Multi-dimensional spectroscopy can then allow someone to quantify an individual component’s response by providing additional spectral selectivity (the mechanism for how this works is interesting, important, and of no consequence currently). When people want to use spectroscopy to understand the ground state of a system, they are generally interested in the structure of the ground state or the amount of a component in a sample.

Other people do multi-dimensional spectroscopy because they want to know what the excited state(s) of a system look like. They use the first (or more) electric field(s) to build an excited state and the second (or more) electric field(s) to interrogate the newly created excited state. The spectroscopist can then ask the simple question: “What is the spectrum of the excited state?” This can give information about the structure of the excited state, but the spectroscopist can also ask how the spectrum of the excited state changes as a function of time.

My group does multi-dimensional spectroscopy for both of the outlined reasons. We also do spectroscopies that entail three-plus-dimensional scans, where we scan pump color, probe color, and the time delay between them (and even more dimensions). Broadly, I am interested in how certain semiconductors respond to having light shined on them. If I am interested in what happens the instant directly after excitation, then I need to use a spectroscopy of the first type—I need to know the electronic structure of the semiconductor. Conversely, if I am interested in how charge flows through a semiconductor as a function of time, that is a question the second type of spectroscopy can answer.

In general, as a spectroscopy becomes more intricate, it can be used to unravel more complicated puzzles. This is a direct analog to what you know about multi-dimensional NMR, where scientists build intricate pulse sequences to hash out particular solvent effects on particular atoms in a sample. Optical spectroscopists choose time orderings, phase-matching geometries, and frequencies of electric fields that work in concert to hash out a complicated puzzle.

I would like to offer a word of caution. I don’t want you (or others) to think that “simple” or “old” measurements like 1D absorbance are not useful or interesting. A wealth of information is buried in an absorbance spectrum. Moreover, it is not trivial to get a good absorbance spectrum on non-traditional samples like semiconductor thin films (what I work with). Single-dimensional and multi-dimensional spectroscopies all give important and complementary information about a sample. They are all tools in the scientist’s toolbox that she can use to unravel a complex system.

I hope you have had a wonderful start to a prime year.

Darien

Electrons Sloshing About: Resonant Versus Nonresonant SFG

Good afternoon, Darien!

Today, I would like to briefly nerd out about a spectroscopy technique that is tremendously cool. I have spent a lot of time over the past month shadowing Shuichi (a postdoc in my group) in order to learn how to operate his laser setup. I find myself continuously asking “What is this optic?” or “Why did you move that optic?”, and the answers I get are usually pretty straightforward: “That is a mirror,” and “I moved it so that the beam would pass through this iris,” and (increasingly) “I have told you this several times before. Perhaps you should take notes this time.” A while back, though, I got an answer that was really neat, and that is what I would like to share with you in this post. Before I dive into the glorious details, I should lay a bit of background.

The setup we were working on was our sum frequency generation (SFG for short) setup. Just in case somebody besides you reads this, I suppose I should elaborate. Sum frequency generation is a process in which photons of two different colors of light (red light and infrared light, in my case) combine their energies to create a new photon with an energy that is the sum of the energies of the two initial photons. This is fascinating, because photons ordinarily don’t interact with each other at all. You can point two flashlights so that their beams intersect, but the beams will continue right on without any combination, collision, or other observable change. In order to make photons combine with each other, you need two things: a VERY high density of photons, and a crystal or reflective surface to shine the light on. Thus, like any other SFG setup, ours consists of a visible beam and an IR beam which both strike a surface (gold or platinum) at exactly the same position at exactly the same instant.1 After this, any SFG photons that are created go to our detector, and we see a nice spectrum of intensity versus photon frequency, with the center frequency lying at the sum of the frequencies of the red and IR beams we used (by beams, I of course mean pulses).
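Since photon energies add and E = hc/λ, the output frequency is just the sum of the input frequencies, which means the inverse wavelengths add. With some illustrative, made-up numbers (an 800 nm visible pulse and a 3333 nm IR pulse, not necessarily our actual wavelengths):

```latex
% Photon energies add, so frequencies (and inverse wavelengths) add:
\omega_{\mathrm{SFG}} \;=\; \omega_{\mathrm{vis}} + \omega_{\mathrm{IR}},
\qquad
\frac{1}{\lambda_{\mathrm{SFG}}} \;=\; \frac{1}{800\,\mathrm{nm}} + \frac{1}{3333\,\mathrm{nm}} \;\approx\; \frac{1}{645\,\mathrm{nm}}
```

so the SFG photons would emerge around 645 nm, bluer than either input beam, which is part of what lets the detector pick them out.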

SFG works by a two-step process. First, the electromagnetic field of the IR beam sets up a sort of vibration of the electrons in the material being used, and then the visible beam strikes the surface. The visible beam picks up energy from the vibrating field of the electrons that are sloshing about2 in response to the IR beam, which we observe as an increase in the frequency of some of the photons in the visible beam.3 Now, in practice, this doesn’t look like a two-step process most of the time. The electrons can dump their energy into the visible beam at pretty much the same instant that they are made to slosh about by the IR beam. In fact, the electrons in a metal will lose all of their sloshy vibrational energy within something like 100 fs, so if you are trying to get an SFG signal from a metal surface, you must make sure that your visible beam arrives at pretty much the same time as your IR beam.

We use a gold mirror for our SFG alignment, and we carefully adjust a delay stage while watching the signal from our detector.  When we see that the SFG signal is maximized, we are happy, because we have found “delay zero”, that marvelous delay stage position that makes the visible and IR beams arrive at the same instant in time.  But measuring SFG from gold surfaces isn’t our ultimate objective.  We want to measure the SFG signal from a carbon monoxide (CO) layer adsorbed onto a platinum electrode.  It turns out that carbon monoxide really likes platinum (just as it really likes hemoglobin), so it is pretty easy to get it to stick to the surface in a nice layer.  It also turns out that when CO absorbs IR light, the electrons in its molecular orbitals vibrate quite nicely.  Unlike the sloshy vibrations in a metal, though, these are nice regular oscillations, that are said to be “resonant”.  To distinguish between this sort of excitation and that in metals, we call processes “non-resonant” if they involve the latter type of excitation.

Many of our experiments thus far have used CO as a probe to learn about the conditions at the surface of an electrode when a potential is applied. So, after using a gold mirror to find delay zero, we exchange it for a platinum mirror covered with an adsorbed layer of CO. Once this is in place, we can look at our signal and see a really big SFG peak. This makes us happy, because everybody likes a high signal-to-noise ratio.4 To celebrate, we do something very strange: we go back to the delay stage and move it to delay the arrival of the visible beam until hundreds of femtoseconds AFTER the IR pulse. This makes our glorious SFG signal go down! Our signal-to-noise ratio is still good, but it has taken an undeniable hit. When I first saw Shuichi do this, I was rather distraught and demanded an explanation. “Why did you move the delay stage???” I asked. “You decreased the signal!!” And then Shuichi proceeded to give me the fantabulously cool explanation that I am about to share with you.5

As you might expect, not all of the IR light is absorbed by the CO layer. Lots of it passes right through and excites the electrons in the metal, making them slosh about. Thus, if the visible beam is made to arrive at the same time as the IR beam, the SFG signal that it generates will be a combination of the nonresonant signal due to the electrons in the metal and the resonant signal due to the electrons in the CO molecular orbitals. This is not good! Our data is hard enough to interpret already. I don’t know what we would do if we had to account for some unknown mixture of signals from the CO and the platinum surface beneath it. Happily, everything is resolved by delaying the visible beam. Although the nonresonant excitation of the electrons in the metal is very short-lived (as already stated), the resonant excitation of the electrons in the CO molecular orbitals is much longer-lived. Thus, if we delay the visible beam by a few hundred femtoseconds, the nonresonant excitation will be gone before the visible pulse arrives, but the resonant vibration of the CO bonds will be almost as strong as ever. In this scenario, the only SFG signal that can possibly result is the resonant SFG signal from the CO! We may sacrifice a little bit of our resonant SFG signal, since some of the CO vibrations will decay while we wait for the nonresonant excitation to dissipate completely, but it’s totes worth it!
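To put Shuichi’s trick in toy-model form, here is a minimal sketch of the two decays. The time constants are invented for illustration (the only thing I’m confident of is that the metal’s response dies on the ~100 fs scale while the CO vibration lives much longer):

```python
# Toy model of delay gating in SFG: the nonresonant (metal) response decays fast,
# the resonant (CO) response decays slowly, so a few hundred fs of visible-pulse
# delay kills the former while keeping most of the latter. Time constants made up.
import numpy as np
import matplotlib.pyplot as plt

delay = np.linspace(0, 1000, 500)  # visible-pulse delay after the IR pulse (fs)
tau_nonresonant = 100.0            # sloshy electrons in the metal (fs)
tau_resonant = 2000.0              # CO vibrational coherence (fs), assumed value

nonresonant = np.exp(-delay / tau_nonresonant)
resonant = np.exp(-delay / tau_resonant)

plt.plot(delay, nonresonant, label="nonresonant (metal)")
plt.plot(delay, resonant, label="resonant (CO)")
plt.axvline(300, linestyle="--", color="gray")  # a typical gating delay
plt.xlabel("visible-pulse delay (fs)")
plt.ylabel("remaining response amplitude")
plt.legend()
plt.show()
```

At a 300 fs delay, the metal’s response is down to e^-3, about 5%, while the CO response is still at e^-0.15, about 86% — exactly the trade Shuichi was making.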

Have a marvelous afternoon!

Footnotes:

  1. It turns out that it isn’t always necessary to arrive at exactly the same instant, and this will be elaborated on in a couple of paragraphs.
  2. This phrasing was used by Dana when discussing these things with me.  I like it so much that I will proceed to use it repeatedly throughout this post.
  3. I am told that the interaction with the IR is a “polarization” sort of interaction, while the second step is a “Raman” sort of interaction, but I didn’t understand fully enough to elucidate that in more detail.
  4. No citation is needed for this statement, as it is common knowledge.
  5. The next paragraph is what I really wanted to share.  It may sound a bit anticlimactic, though, because I was compelled to spread much of the coolness of it throughout the paragraphs of background information.

On Causation, Entropy, and Semiconductors: Friday Night Persnicketiness

Alexander,

Let us talk about two things that interest me: causation and thermodynamics.1

I was reading a paper today that discusses the frontiers of current research into how charge transfer/separation works in some two-dimensional semiconductors/heterostructures. The whole conversation in this micro-field is around the question of how we can build solar cells that are more efficient. To build these efficient solar cells, we must be able to have electrons get ripped away from their happy positions in their material homes so that they can flow through wires. In reading this paper, the following phrase caught my attention:

“Such an increase in DOS [density of states] with r [distance] provides an entropic driving force for charge separation.”

I would like to point out something a bit interesting in this sentence. The authors of this paper say that entropy is driving something to happen. Entropy is cast as the cause of an event. This type of language is ubiquitous in chemistry—especially in general-chemistry-type settings, where we ask our students such questions as “Is this process entropy or enthalpy driven?” There is something curious here, though. I made it very clear to my students when we discussed entropy that keeping track of the total entropy of the universe allows me to know whether a process is possible, but it doesn’t tell me whether the process will actually happen on any reasonable timescale. The common way of thinking about this goes something like: thermodynamics tells us what is not forbidden, but it doesn’t tell us whether those things will actually happen.
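For anyone whose gen chem is rusty, the textbook criterion I have in mind is the following standard relation (this is not from the paper): at constant temperature and pressure, a process is thermodynamically allowed only if

```latex
% Spontaneity criterion at constant T and P:
\Delta G \;=\; \Delta H - T\,\Delta S \;\le\; 0
\qquad\Longleftrightarrow\qquad
\Delta S_{\mathrm{universe}} \;\ge\; 0
```

and nothing in that inequality says anything about how fast the process proceeds, or whether it proceeds at all on a laboratory timescale.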

So why were these authors so ok with saying that the entropy change of the process was driving the event? This is where the language (and thinking) of chemists (and maybe scientists/humans in general) becomes so weird. We oftentimes have a hard time correctly saying which things cause other things and which things are merely necessarily connected to the change.

I don’t think the authors are correct in saying that entropy drives charge separation. Rather, there must be some other physical mechanism(s) at play that cause electrons to get ripped from their positions when I shine light on a solar cell. This conviction comes from something that I think is rather fundamental: thermodynamic state functions allow us to keep track of things and events, but they don’t force those things to happen. Now let us cast this statement in an analogy.

Let us think of my bank account. It is currently populated with some amount of currency. If we wanted to, we could track the time evolution of my bank account by monitoring the amount of currency. Now, my paying of rent is necessarily connected with a decrease in the amount of money in my bank account. Additionally, if I have no money in my bank account, I cannot pay my rent. In this way, the money in my bank account is kind of like the thermodynamic entropy change with respect to a process. I hope we can both agree that my rent being paid is not caused by or driven by the fact that my bank account has money in it. The cause of my paying rent has its very start in my desire to continue living in my house next month. This desire won’t be fulfilled if I don’t have money in my account (just as a reaction won’t happen if the total entropy change is not positive), but the payment is not caused by money being in my account! Rather, the money in the account is a necessary condition for the event to happen, but it is not a sufficient condition, and so it cannot be the cause of my rent check getting written.

The very notion of a cause now comes into play. How do I know when one event causes another? The hardcore Humean empiricist will of course say that we can never know for sure that one event causes another; all we can know is that one event always (to our best knowledge) directly follows another event. But we pragmatic chemists who actually desire to help humanity probably shouldn’t spend all of our time thinking this way. We want to be able to build semiconductors that make good solar cells, after all. So we should probably be okay with saying that some things cause other things (like billiard balls smacking into each other), because we want to be able to use these causes to inform our decisions in solar cell fabrication.

To say it again, the pure empiricist doesn’t ever want to say that anything causes something else. They are much happier saying that events are correlated in some way. But we humans want to 1) build concrete associations between events and 2) build solar cells. We could just build solar cells based on massive amounts of known correlations. But this makes us sad; we like thinking that we are making solar cells informed by our proposed mechanisms for how charge transfer happens at an interface.

It is now apparent that we scientists are all screwed up. We want to build things, but we aren’t quite sure how to build the perfect solar cell. Moreover, using our current method of inquiry, we can never be sure how to build the perfect solar cell. We can’t even know whether a solar cell connected to my battery is really the reason my battery is charging… oh dear. I guess we are stuck using words like ’cause’ and ‘drive’ in an informal and imprecise way. We must be very careful with these words, though. As always, we should never confuse cause and correlation; we should never confuse the fact that a room always gets messier with the actual cause of the room getting messier—the toddler writing on the wall.2

-Darien

And now the footnotes:

  1. For those that have not taken a chemistry class in a while, the carefully defined notion of a ‘spontaneous process’ is important to have in one’s back-pocket for this post. Please see the possibly esoteric Wikipedia page on spontaneous processes. I apologize for this post going against the originally stated rules of AlkynesofPi, but it seems like we violated those within a few weeks of inception.
  2. Now, Alexander, after the above rant about causation in semiconductors, I assume you have found a little flaw in my dislike of the authors’ usage of the notion of an entropy-driven process. You should be saying to me: “But Darien, maybe entropy IS actually a cause. You don’t know. It seems to be really well correlated with lots of events.” To this I sheepishly respond: the notion of entropy causing anything is rather messy. It seems to be a universal truth that any change that occurs is accompanied by an increase in the total entropy of the universe. So the authors’ noting that the entropy increase of the system and the event happening are correlated is nothing new. This is such a not-new thing that we have a universally accepted name for it — the Second Law of Thermodynamics. This not-newness maybe makes it useless in the rational design of solar cells. But the flaw in my original peeve is still evident: this entropy change may indeed be the cause; I simply don’t (can’t?) know.

When Lower Quality Is Better: The Splendidly Sneaky Art Of Q-Switching

Good afternoon, Darien. 

As you know, I work with a Ti:sapphire ultrafast laser system. I suppose that, to be precise, I should say that I work with two such systems: one for each of my projects. While we are being precise, though, one might call into question my usage of the term “work” here. I shall move on quickly without addressing this and hope that nobody notices. Anyway, my point is that my lasers generate femtosecond pulses using a cool process called mode locking. You are familiar with this, I am quite sure. We are both also aware that there are other lasing mechanisms capable of producing pulses. One of these is Q-switching, which produces very high power pulses that are not nearly as short as the pulses we use in our research. Shortly after joining the Dlott group, I was given an introductory book on lasers, which contained a brief overview of the mechanisms of laser action in various types of pulsed lasers, but I found myself quite confused by it. I asked Will Shaw (alias “Old Will”) to explain this material to me, and he kindly obliged. What follows is my attempt to reproduce his excellent explanation of Q-switching. He actually uses a Q-switched laser, by the way. The high power apparently comes in handy for blasting aluminum foil pellets at panes of glass.

As you know, lasers have three important parts: the crystal, which produces the light; the pump, which energizes the crystal so that it can produce light; and the cavity, which stores the light so that it passes through the crystal repeatedly. The “Q” in “Q-switching” stands for “quality”, and the quality being referred to is the quality of the cavity. In this context, a high-quality cavity is one that stores light very efficiently, while a low-quality cavity is one that either absorbs the emitted light or lets it escape without being reflected back through the crystal. For most types of lasers, it is uber important to have a high quality factor, since you need the beam to pass through the crystal many times in order to be amplified into a useful output. In Q-switching, though, you want your cavity quality to be very low most of the time, and only become good occasionally. This could be accomplished by sticking an aluminum block in front of one of the cavity end-mirrors, in order to absorb any light that came from the crystal. While the block is in place, the quality factor is low, but it “SWITCHES” to high quality the moment you remove the block. Thus the term “Q-switching”. In practice, there are more sophisticated ways of doing this, but the idea is the same.

If I were reading this, I would have become confused and given up during the last paragraph, because it kept going on about Q-switching but never explained how a low quality factor can allow lasing to happen. I will try to remedy that here. It turns out that Q-switching works by achieving a VERY inverted population in the crystal. Not some wimpy, ordinary population inversion where you barely have more excited-state species than ground-state species. I’m talking about the vast majority of the species being excited. Most lasing mechanisms don’t allow such a state to be achieved, because as soon as you get a little bit of population inversion, you start building up an increasingly powerful pulse train that “sweeps out” the excited-state species from the crystal via stimulated emission. In Q-switching, though, you don’t have any pulse train, because you have made the quality super low. You are pumping your crystal, and some of the excited species in the crystal undergo spontaneous emission, but the fluorescence generated this way gets absorbed once it leaves the crystal, rather than reflecting back to be amplified by stimulated emission. Sure, a photon spontaneously emitted in the center of the crystal might cause a little stimulated emission on its way out of the crystal, but spontaneous emission is rare enough that the population inversion isn’t substantially depleted by this process. With no way to leave (other than rare spontaneous emission events), the energy pumped into the crystal just builds up. The population inversion gets more and more extreme. When it finally approaches a maximum, you are ready to fire your pulse! You remove the aluminum block (or activate your sophisticated mechanism for switching the quality) and PRESTO! Nothing happens. At first. It takes a bit for the stimulated emission to start building up. Once it does, though, you get rapid, exponential growth of the pulse power in the cavity. All that energy stored in the form of an inverted population gets swept out as a tremendously powerful pulse, which you release from the cavity as soon as the population inversion is depleted. Then you start all over again. I find it terribly cool that you can store up energy in a crystal this way! The idea of a nearly complete inversion of population is enough to plaster a sloppy grin on my face. I hope you find this as cool as I do. If you don’t, I’ve clearly communicated it poorly.
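If you’d rather see the story as equations, here is a toy rate-equation sketch of the whole cycle. Everything is dimensionless and every parameter is made up; this is a cartoon of the mechanism, not a design calculation:

```python
# Toy Q-switch rate equations (all quantities dimensionless and made up):
# pump the inversion with the cavity spoiled (high loss), then switch the loss
# low and watch the stored inversion dump into one giant pulse.
import numpy as np
import matplotlib.pyplot as plt

dt = 1e-3
t = np.arange(0.0, 25.0, dt)
pump = 1.0        # pumping rate into the excited state
seed = 1e-9       # tiny spontaneous-emission seeding of the cavity mode
t_switch = 20.0   # the moment we "remove the aluminum block"

n = np.zeros_like(t)  # population inversion
p = np.zeros_like(t)  # photon number circulating in the cavity

for i in range(len(t) - 1):
    loss = 50.0 if t[i] < t_switch else 2.0  # low Q while pumping, high Q after
    # Inversion: pumped up steadily, swept out by stimulated emission.
    n[i + 1] = n[i] + dt * (pump - n[i] * p[i])
    # Photons: net gain (inversion minus loss), plus a spontaneous-emission seed.
    p[i + 1] = p[i] + dt * ((n[i] - loss) * p[i] + seed * n[i])

plt.plot(t, n, label="population inversion")
plt.plot(t, p, label="cavity photon number")
plt.axvline(t_switch, linestyle="--", color="gray")
plt.xlabel("time (arbitrary units)")
plt.legend()
plt.show()
```

Before the switch, the gain never beats the enormous loss, so the inversion just climbs; right after the switch, “nothing happens” for a bit while the photon number grows exponentially from its spontaneous-emission seed, and then the giant pulse erupts and sweeps the inversion out, exactly as in Old Will’s explanation.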

If you wanted to generate high-power pulses using another method, you would run into trouble. If I remember correctly, maintaining a really powerful pulse train in the cavity ends up damaging it… or something like that. I think the advantage of Q-switching here is that the pulse is only very powerful for a brief instant (a few trips through the cavity) before it is released. I’m not certain of this, though. I should probably read up on it some more. But then, there are many things that I should read up on, and I should be reading some of them right now. I’m afraid I have dashed this off rather quickly, so please enquire concerning anything which is unclear.

Have a fabulous afternoon!

-Alexander

Usage of the word “electron” through time

Good evening Alexander,

It is Saturday. My tome concerning the Gibbs Paradox and entropy is not finished yet, so here is a brief rambling about something I have spent the last few hours playing with.

From Wikipedia:

In 1874 Irish physicist George Johnstone Stoney suggested that there existed a “single definite quantity of electricity”, the charge of a monovalent ion.

It wasn’t until 1894 that Stoney coined the term electron (a combination of the words electric and ion). Now, this is according to Wikipedia’s article concerning the electron (Wikipedia being the ultimate source on everything). However, according to the Oxford English Dictionary (OED), this original usage actually happened in 1891. (Interestingly enough, Wikipedia cites the OED for its information—which differs by three years. Someone is wrong. Maybe Wikipedia isn’t so amazing…) The work where, according to the OED, the word electron first appears is as follows:

J. Stoney in Trans. Royal Dublin Soc. 4 583:   A charge of this amount is associated in the chemical atom with each bond…These charges, which it will be convenient to call electrons, cannot be removed from the atom; but they become disguised when atoms chemically unite.

But let us be a little curious. What happens if we do a Google Ngram search on the word ‘electron’? To do this right, we also need to do it for the German form of the same name, ‘elektron’. A Google Ngram search is pretty cool; one of the things it tells you is the normalized frequency of a word in books published in a given year, as a percentage of apparently all the words published in that year (NB: this is actually a bit of a simplistic way of explaining what Google is doing). When you do that, you see the following graph (this link takes you to an interactive version of the graph).

Ngram Viewer. Electron 1700-2009.

We see a big rise in usage around 1900 (when quantum mechanics was nascent), and it peaks around the 1960s (when QED was pretty much hashed out). But we also see some interesting blips in the graph around 1713 and from around 1790 to 1820. What are these blips? Well, let us look at plots over those time periods.

Ngram Viewer. Electron 1700-1750.
Ngram Viewer. Electron 1750-1850.

According to Google, there is an English spike in usage of the word electron in 1713, a German spike in 1793, an English one in 1801, a German one in 1809, and a German one again in 1819. Then Google reports peace on the electron front until 1899 (eight years after what the OED reports to be the first usage of the word electron). One thing to note here is that this Google search is only good for books that Google has perused and scanned. It may not be any good for scientific journals—I don’t know what all Google has perused.

It seems as if the word electron was used earlier than the OED and Wikipedia say. Moreover, when it was first used in book form, the theory of electricity did not have a particle responsible for electric charge, current, attraction, and repulsion, but rather a fluid (or fluids, for some theories)—the idea behind the word electron (an electric ion… think electric particle) hadn’t really existed in 1713 or in 1801.

Now, this may just be noise in Google’s algorithms or scanning methods, but if it isn’t, I am rather interested to know which books provided those spikes in usage. Additionally, it seems very unlikely that noise in the data would happen for both the English and German forms of electron in the same rough time period.

I attempted to figure out how to coax Google Books into telling me which books those words came from; however, I wasn’t able to succeed. Moreover, I don’t know how even to go about a quest to find out which works those words came from, so it seems as if my inquiry dies here.

I look forward to reading your post concerning permittivity and the like—we both know I didn’t actually fully read any of the links that you posted. That would take all the fun out of you doing all the work of understanding them.

—Darien

As a brief postscript:

The plot below is identical in original data content to the first plot in this post. However, all of the data is smoothed. You will note that there does not appear to be any usage of the word electron before the late 1870s. Thus, the interesting data spikes that formed the idea behind this post are easily smoothed into non-existence. We may use this as a great example of the dangers of smoothing data.

Ngram Viewer. Electron. Smoothed. 1700-2009.
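If you want to see the smoothing effect without Google, here is a minimal sketch using entirely synthetic data (not the real Ngram counts): a lone spike standing in for the 1713 blip, a modern rise, and a moving-average smooth of the kind the Ngram Viewer applies:

```python
# How a moving-average smooth erases an isolated spike. Synthetic data only.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1700, 2010)
freq = np.zeros(years.shape)
freq[years == 1713] = 1.0e-7  # a lone spike, standing in for the 1713 blip
modern = years >= 1880
freq[modern] = 1.0e-6 * (years[modern] - 1880) / 130.0  # the post-1880 rise

window = 21  # +/- 10-year moving average, similar to the Ngram "smoothing" setting
smoothed = np.convolve(freq, np.ones(window) / window, mode="same")

plt.plot(years, freq, label="raw")
plt.plot(years, smoothed, label="smoothed")
plt.xlabel("year")
plt.ylabel("relative frequency")
plt.legend()
plt.show()
```

The 21-year average dilutes the lone spike by a factor of 21, down into the noise floor, while the broad modern rise is barely changed — exactly the behavior in the plot above.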

Permittivity and Frustration

Good evening, Darien.

It’s Saturday.
Forty-eight hours ago, my goal for this post was to elucidate the concept of electrical permittivity. I have since given up on that goal. Permittivity, although it seems delightfully simple, has proved frustratingly difficult to get my head around. I feel that I have greatly improved my understanding as a result of my efforts, but I am still quite far from a full understanding. There are many sources available online that explain these things far better than I can as of now. I plan to continue investigating this, and hopefully I will come up with a worthwhile discussion in a couple of weeks. Until then, I will share some of the websites that I have found most helpful.

This article is focused on permittivity. Quite helpful. Links ε with E, P, and D.

This article is initially helpful, but then goes too broad.

Here we have a brief application of permittivity to antenna theory.

SPLENDID!!! This and this answer the question “Why isn’t ε0 = 1, since nothing can be more empty than a vacuum?”
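For reference, the relation those articles keep circling (a standard textbook result, not something specific to any one of the links): in a linear, isotropic medium, the permittivity is exactly the constant that links the E, P, and D fields mentioned above,

```latex
% D = displacement field, E = electric field, P = polarization of the medium
\mathbf{D} \;=\; \varepsilon_0 \mathbf{E} + \mathbf{P}
\;=\; \varepsilon_0 (1 + \chi_e)\,\mathbf{E}
\;=\; \varepsilon\,\mathbf{E}
```

where χ_e is the electric susceptibility of the material and ε_0 is the vacuum permittivity whose non-unity value those last two links try to demystify.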

Until next week,

-Alexander

Inaugural Post: Rules and telos (or lack thereof)

Good afternoon, Alexander, it’s Sunday.

As the inaugural poster for AlkynesofPi (note to us: we need to decide how we want to stylize this moniker), I have been beset with many wonderful possibilities for a first post to set the tone of our blog. I’ve considered posting about electromagnetic pumps; I even printed off a bunch of papers concerning pumps, and, of course, I promptly did not read them, as per my usual style. As of yesterday I was thinking greatly about a question that was posed to us in a physical chemistry exam, so I considered writing about the mixing of ideal gases and how entropy is a bit of a tenacious and possibly pernicious idea. Alas, none of those actually happened. Why? I’m being lazy, of course.

So instead of a snarky review of the Gibbs paradox or a discussion of how awesome engineers are for designing electromagnetic pumps, you get to read a terribly uninteresting post concerning the ideas and guidelines behind AlkynesofPi.

Firstly, there are no rules. If there were rules, then the first rule would HAVE to be that one cannot post about something uninteresting. Clearly I am violating that rule, and a violation of rules sets a bad tone for a highbrow blog, no? So, that must be the first rule—that there are no rules.

Second rule: posts are to be made on alternate weeks by each of us, with a week starting on Sunday and ending on Saturday. In other words, I posted this today, Sunday. Come next Sunday, you have until the following Saturday to post. If you don’t post within that time period, you will forever be shamed as the first-poster-to-not-post.

Thirdly, citations are a good idea. Endnotes, maybe? However, there is no rigid formatting that needs to be followed; for instance, you may merely provide a link to the wiki page from which you pulled your information. Consistency is a good thing, though.

Fourthly, esotericness should be kept to a minimum. Posts that are esoteric should be flagged as such. For instance, if I had a post concerning the Franck-Condon principle and I used the notion of an overlap integral without explaining what an overlap integral is, then I ought to title the post along the lines of “Franck-Condon Principle and All That Jazz: Pchem Nerdiness Required”. But, in general, posts should be written for an intelligent audience that has not spent the last four years studying chemistry, physics, and mathematics.

This moves us nicely into the reason behind this blog. I think the telos of this blog lies in our betterment and enjoyment. I hope that by writing bi-weekly, each of us will improve our written communication skills, become better at explaining concepts, and, when that fails, pull our hair out over how persnickety a concept pretty much anything is when you try to elucidate it.

It is in the communication of truth(iness?), be it science, philosophy, theology, or art, that the life of this blog will be. This communication will be hard, much harder than conveying the rigid information of an experiment in a journal. Hopefully it is worth the time.

I leave you with one of my favorite poems by Emily Dickinson—with all of her em dash wizardry.

Tell all the truth but tell it slant —
Success in Circuit lies
Too bright for our infirm Delight
The Truth’s superb surprise
As Lightning to the Children eased
With explanation kind
The Truth must dazzle gradually
Or every man be blind —

Until next week,

Darien