Why the quality of teaching programming is so bad

A friend asked why her R statistics programming course on a MOOC was so terrible. She said 90% of the information on the quizzes was in the lecture, but the other 10%? Left for you to discover on your own.

Welcome to the problems I am struggling with. I am now a programming teacher, in most ways that matter. My newest job is about half research and half teaching. What you’re finding is completely the norm, and in fact I’d say 90% is pretty good. Sad facts.

The terrifying status quo is that we have a sixty-year-old field, one that started with self-teaching in the early years: some very smart mathematicians and electrical engineers figured out how it could and should work, but the early perception was that designing programs was the hard part, conceiving of the math to represent them, and that actually programming the math into the computer was a technician’s job. (Notably, programmers were usually women. Program designers and system architects were usually men. This turns out to be relevant.)

As the field started to grow, programming came to be recognized as requiring the bulk of the problem-solving skill, since efficiently encoding mathematics, where a symbol might mean “with all values from zero to infinity”, into a computer with only thousands of words of memory took clever reworking of problems. The early work was largely uncredited, dismissed as mere “entering the program into the computer”.

In the late 70s and into the 80s there was a land grab for the prestige of being a programmer. A new labor category of “software engineer” was created: a professional engineering job, not the mere technician’s role of programmer. Women were excluded from programming, sometimes deliberately by male programmers, sometimes as a matter of practice by engineering schools.

With this shift, on-the-job training that expected no prior familiarity, along with the few established training programs, was replaced by engineering school. The discipline was assumed to be a branch of either mathematics or electrical engineering, and programming courses became upper-division electives for engineers working largely in theory. All of this runs counter to the people (particularly Margaret Hamilton) who set out to make software engineering a discipline, but the gestalt of the industry has definitely moved away from valuing teaching.

The net effect of that shift is that the pedagogy of teaching programming was interrupted.

A few training programs remained, but usually tied to industry, and particular companies. The industry balkanized significantly in this period, so IBM would teach IBM programming, and Oracle would teach Oracle programming. The abstract skills of programming are highly portable between languages and fields, but at the raw syntax of a given programming language, the details matter.

Now, another relevant thing is that computers have sustained a tremendous pace of development for these sixty years. With the computation of a chip roughly doubling every 18 months, there have been significant periods where practices would be introduced, thrown away, and replaced much faster than the cycle of getting a student through college and into an adjunct professor’s seat. What they were taught as entry-level students is no longer used, or is wrong in some way, by the time they’re in a position to assist a teacher or to teach, themselves.

Both of these have caused most programming teaching to avoid specifics and to only teach the most abstract portions, the parts that will have a longer shelf-life than the details, and to avoid being entrenched in only one part of the industry.

Some schools are finally climbing their way out of this – MIT now teaches Python, an industrial rather than academic language, instead of the prior Scheme, and some European software shops are starting to use Haskell, which began as an academic language – so the crossover is finally happening, but it’s a slow process.

It’s all screwed up. Specifics of systems are needed to actually learn and build things, but the academic process deals largely in abstract terms, and bridging that gap is difficult. On top of that, there’s the notion that some people are inherently good at programming, probably derived from similar thoughts about math, so there’s a certain impatience with explaining, and an arrogant derision for people who don’t know the details.

So what’s someone to do?

At this moment, programming specifics are usually peer-taught, so working with people who’ve worked with the specific system and can advise about the syntax and specifics is important. Even in industry this is recognized, if informally, in the practice of ‘pair programming’. Seek classes that get into the details, not just the theory. It will be a mixed bag, but there are good classes out there – just know that ‘good teaching’ of programming is not something systematically understood, and not universally valued.

Creating just online social spaces

The last two months have seen two Slack chats start up to support marginalized groups in the technology field, LGBTQ* Technology and Women in Technology, and we’ve had a lot of discussions about how to run the spaces effectively: not just being a place for the people it says on the tin, but supporting, encouraging, and not being terrible to people who are marginalized in ways other than the one the particular group is trying to represent.

This is a sort of how-to guide for creating a social Slack that is inclusive and just, and a lot of this will apply to other styles and mediums of interaction.

The problem begins thus: How do you keep a Slack started by a white gay cisgender man from reflecting only that as a core group? How do you keep a women in technology chat from being run entirely by white women of (relative) affluence afforded by tech industry positions, leaving women of color, trans women, people with disabilities out in the cold?

Making just social spaces is not a one-time structural setup, though things like a good Code of Conduct are an important starting place, and there are difficult balances to strike.

Make sure there is sufficient representation. Social spaces grow from their seed members, and as studies have shown, people’s social networks tend to be insular by race and gender: white members beget more white members; men bring more men, especially in technology, as we’ve found. If a space is insufficiently representative of the diversity of experiences that should be there, people will leave, having seen yet another space that isn’t “for” them. So, too, power structures reflect the initial or core body of a social group, and a social group will tend to reflect the demographics of those in positions of power, creating a feedback cycle that is hard to break without a lot of effort. Seed your network as broadly as you can, and put people from a range of backgrounds in power.

Empower a broad group. A few admins can’t guide and create the shape of the space alone, so empower users to make positive change themselves.

Plan for timezones. If your chat starts off with US users, you will find that they will dominate the space during US waking hours. You may find an off-peak group in Europe, with an almost entirely separate culture. Bridging the gap with admins in other timezones to help consistently guide the shape of the group can be helpful.

Your users will have reactions to the media that gets posted. In particular, seizure disorders can be triggered by flashing animated GIFs. Building an awareness of this into your social space early can help make sure these are not posted, or are restricted to certain channels. Likewise, explicit imagery and upsetting news and articles can be marked or restricted, even without banning them entirely.

Plan for how to resolve conflicts. While outright malicious violation of a Code of Conduct can be solved by ejecting members, most cases of conflict are more nebulous, or not so extreme nor malicious that a first offense should involve removal from the space. Slack in particular has let the LGBTQ* Tech group practice a group form of conflict resolution. We created a #couldhavegonebetter channel. When a conversation strays off the rails, turning vindictive, becoming oppressive on the part of a member of a relatively privileged group, or evangelizing views that make others uncomfortable, a strategy that has worked well is to end the conversation with “That #couldhavegonebetter”, force-invite the users involved into the channel, and start with a careful breakdown of how the discussion turned problematic. This gives a place to discuss that isn’t occupying the main space; those who care about conflict resolution can join the channel. It’s not super private, but it’s the equivalent of taking someone aside in the hallway at a conference rather than calling them out in front of an auditorium full of their peers. De-escalation works wonderfully.

Keep meta-discussion from dominating all spaces. It’s a human tendency to navel-gaze, doubly so in a social space, where the intent of the members shapes the future of the space. That said, it can dominate discussion quickly, and so letting meta-discussion happen in channels separate from the thing it’s discussing can keep the original purpose of channels intact.

Allow the creation of exclusive spaces. Much of the time, especially socially, marginalized people need a place that isn’t dominated by, or simply doesn’t include, the group that talks over them most: people of color need to escape white people, trans people need to escape cisgender people, people outside the US need space away from American-centric culture and assumptions, and not-men need space that is not dominated by men. It has ended up being the least problematic to allow the creation of spaces that exclude the dominant group, just to give breathing room. It feels weird, but like a Slack focused on a marginalized group as a whole, sometimes breaking things down even further lets those at the intersection of multiple systems of oppression lighten the load a bit.

A chat system with a systemwide identity has different moderation needs than one without. A problem found on IRC is that channels are themselves the unit of social-space allocation: there is no related space that is more or less intimate than the main group, so conversations can’t be taken elsewhere, and channelization balkanizes the user group. With Slack this is not true. Channels are cheap to create, and conversations can flow between channels thanks to hyperlinks.

Allow people to opt out generally, and to opt in to uncomfortable or demanding situations. A great number of problems can be avoided by making it possible to opt out without major repercussions. Avoid lots of conversation in the must-be-present #general channel, however it’s been renamed (#announcements in one place, #meta in another). Default channels, auto-joined by new users, should be kept accessible. Work-topical channels should be kept non-explicit, non-violent spaces, so they are broadly accessible. Leave explicit imagery in its own channels, and let talk about the ills of the world be avoidable. And keep the volume low in places people can’t leave, if they’ll be in the Slack during their workday.

Good luck, and happy Slacking!

Why MVC doesn't fit the web

A common set of questions that come up on IRC around node web services revolve around how to do MVC “right” using tools like express.

The short answer: Don’t.

A little history. “MVC” is an abbreviation for “Model, View, Controller”. It’s a particular way to break up the responsibilities of parts of a graphical user interface application. One of the prototypical examples is a CAD application: models are the objects being drawn, in the abstract: models of mechanical parts, architectural elevations, whatever the subject of the particular application and use is. The “Views” are windows, rendering a particular view of that object. There might be several views of a three-dimensional part from different angles while the user is working. What’s left is the controller, which is a central place to collect actions the user is performing: key input, the mouse clicks, commands entered.

The responsibility goes something like “controller updates model, model signals that it’s been updated, view re-renders”.

This leaves the model relatively unencumbered by the design of whatever system it’s being displayed on, and lets the part of the software revolving around the concepts the model involves stay relatively pure in that domain. Measurements of parts in millimeters, not pixels; cylinders and cogs, rather than lines and z-buffers for display.

The View stays unidirectional: it gets the signal to update, it reads the state from the model and displays the updated view.

Even the controller is pretty disciplined: it takes input and turns it into definite commands and updates to the models.

Now if you’re wondering how this fits into a web server, you’re probably wondering the same thing I wondered for a long time. The pattern doesn’t fit.

On the web, we end up with a pipeline something like “Browser sends request to server, server picks a handler, handler reads request and does actions, result of those actions is presented to a template or presentation layer, which transforms it into something that can be sent, which goes out as a response to the browser.”

request -> handler -> presentation -> response

It still often makes sense to separate out the meat of the application from the specifics of how it’s displayed and interfaced to the world, especially if the application manipulates objects that are distinctly separate. An example: it makes no sense to bind the web portions of an accounts ledger particularly tightly to its data model. That same ledger might be used to generate emails, to generate print-outs, and later to generate reports in a completely different system. The concept of a “model” or “business domain logic” layer to an application makes sense:

request -> handler -> presentation -> response
              ^
              |
              v
       business logic

But some time in the mid-2000s, someone thought to try to shoehorn the MVC concept into this pipeline, and did so by renaming these components:

request -> controller -> model -> view -> response

And this is why we end up with relatively well-defined models, since that makes sense, while ‘views’ is a less descriptive name for templating and presentation logic. What’s left ends up being called a ‘controller’, and we start a lot of arguments about whether a given bit of logic belongs there or in the model.

So in express, let’s refer to models and domain logic, to handlers and to templates. We’ll have an easier time of it.

Handlers accept web-shaped data – query strings and post data – and shape it into something the business logic can deal with. When the business logic emits something we should display, that same handler can pass it off to templates, or, in the case of data being rendered in the browser by a client there, serialize it directly as JSON and send it off as the response. Let the business logic know little about the web, unless its concern is the web, as in a content management system. Let our handlers adapt the HTTP interface to the business logic, and the responses out to our presentation layer, even if that’s as simple as filling in values in a template.
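To make that concrete, here’s a minimal sketch of a handler in express. The `ledger` module and its `balanceFor()` function are hypothetical stand-ins for business logic that knows nothing about HTTP:

```js
// A sketch, not a prescription: ./ledger and balanceFor() are made up for illustration.
const express = require('express');
const ledger = require('./ledger'); // business/domain logic, no HTTP knowledge

const app = express();

// Handler: adapt the web-shaped request to the domain, and the result back out.
app.get('/accounts/:id/balance', async (req, res, next) => {
  try {
    const balance = await ledger.balanceFor(req.params.id); // domain call
    res.json({ account: req.params.id, balance });          // presentation: JSON for the client
  } catch (err) {
    next(err); // let error-handling middleware present failures
  }
});

app.listen(3000);
```

The handler stays thin: it translates HTTP in, calls the domain, and translates the result back out.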

We’ll all be a lot happier if MVC keeps its meaning as a paradigm for breaking up responsibility within a GUI.

Updated: read part 2, Why MVC doesn’t fit the browser.

Design Ethos

I just realized that my entire software design ethos is ‘power to the people’.

I started to argue over whether an interface (one that modifies some mutable object, however unfortunate that is) should no-op, throw an exception, or warn when it’s already been run once and runs again.

To no-op is to say “we know better than you and will do what we consider the Right Thing”.

To throw an exception is to say “we know better than you and will make you do what we consider the Right Thing”.

To warn the developer using the module is to say “we have more experience here, and say what we think … but your call. Go for it!”
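As a sketch of those three stances, imagine a hypothetical module whose configure() should only run once (all names here are invented for illustration):

```js
let configured = false;

// Option 1: no-op -- "we know better than you and will do the Right Thing".
function configureNoop(options) {
  if (configured) return;
  configured = true;
  apply(options);
}

// Option 2: throw -- "we will make you do what we consider the Right Thing".
function configureThrow(options) {
  if (configured) throw new Error('already configured');
  configured = true;
  apply(options);
}

// Option 3: warn, then proceed -- "we have more experience here... but your call. Go for it!"
function configureWarn(options) {
  if (configured) console.warn('configure() called more than once; overriding previous options');
  configured = true;
  apply(options);
}

function apply(options) { /* actually mutate the object in question */ }
```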

A social software toolbox

Rate limiting can be implemented as a way to deter high-cost actions, whether the cost is technical, like API calls, or social, like posting comments, where one or two are easy to keep up with but many are a burden on the receiver. Well chosen, rate limits can be invisible to users who are not actively being malicious; poorly chosen, or bound to technical rather than social concerns, they can be arbitrary and frustrating.
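A rough sketch of a socially tuned rate limit (the numbers and names are hypothetical): let one or two comments through, push back on a flood.

```js
const recentComments = new Map(); // userId -> timestamps of recent comments

function mayComment(userId, now = Date.now()) {
  const windowMs = 10 * 60 * 1000; // look at the last ten minutes
  const limit = 3;                 // one or two comments are easy to keep up with
  const times = (recentComments.get(userId) || []).filter(t => now - t < windowMs);
  if (times.length >= limit) return false; // over the limit: deny, delay, or tarpit
  times.push(now);
  recentComments.set(userId, times);
  return true;
}
```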

Tarpitting is adding rate limits so strict that a malicious user can never satisfy them, frustrating them into giving up.

Delay can be a mild form of rate limiting that makes users who are overwhelming the system or other people experience the system as slower and less pleasant to use.

Blocking most often makes users invisible to each other. In the case of public postings, it usually means that one user can’t share the other’s postings or otherwise interact with them, though they can see posts.

Muting simply ignores an undesirable user’s posts.

It’s interesting to note that more marginalized people prefer to block, and less marginalized prefer muting. There are a lot of subtle dynamics in these interactions. Given a private backchannel that doesn’t respect blocking, blocking a user will cause a harasser to escalate privately.

Penalty box is a timed block, shadowban or teergrube that expires, giving users time to cool down. When under a user’s control, it can help separate blocking a bad actor from merely not wanting to deal with someone at the current time.

Private backchannel can allow someone who wishes to connect a way to do so without being public, but can also allow a harasser to privately act poorly while maintaining public good standing. Direct messages are Twitter’s backchannel; replies to author only are a mailing list’s backchannel.

Privacy groups are the permission model of LiveJournal: posts can be restricted to a single privacy group (a list of users) and only viewed or shared within that group.

Friending is initiating a symmetrical relationship, complete only when confirmed by the other party.

Open follow is initiating a one-way relationship, usually expressing interest by the follower in the followee.

Approved follow is initiating a one-way relationship, as in open follow, but requiring the followee to approve the action, as in friending.

Private account is disabling public visibility of an account’s posts, usually requiring the owner to vet followers, as in approved follow.

Upvote/Downvote are a popular way to weed out chaff from a conversation, where offtopic, rude or poorly written comments are downvoted by a community, and popular, funny, or insightful comments are upvoted. It can be problematic when the culture of a community itself reinforces poor choices, and it’s subject to gaming via social campaigns.

Reflection is the act of restating a comment when replying to it. Requiring a commenter to first restate and reflect what the original poster said before posting their reply is an interesting way to try to suppress flame wars of misunderstanding, and also increase the expense of malicious comments. I know of no system that has ever implemented this, but it was proposed by @RebeccaDotOrg and I think it’s a fantastic idea for debate where actual exploration or consensus on a hot issue is interesting.

Shadowbanning is redirecting a malicious user to a dummy version of the site to interact with where their actions will never be seen by real human beings. Often combined with tarpitting or ratelimiting.

Sentiment analysis is a way to automatically try to ascertain whether a comment is positive or negative, or whether it’s inflammatory, and whether to trigger some of the other countermeasures.

Subtweet is commenting in a chronologically related but not directly connected conversation. A side commentary, usually among a sub- or in-group.

Trackback is automated notification to an original post or hosting service when a reply or mention is generated on another site.

Flat commenting is the form typically used by forum software, where posts are chronological or reverse chronological below a topic post.

Threaded commenting is used in some environments like Reddit, Metafilter, Live Journal and some email clients where each message is shown attached to the one it replies to, giving subtrees that often form entirely different topics.

Weakly threaded commenting shows threading only for conversation entries from followers. Often implemented client-side, given an incomplete reply graph.

Real identity can cause some commenters to behave, particularly in contexts associated with their job.

Pseudonymous identity can give stability to conversations over time, showing that the same actors are present in conversations. If easy to create more identities, can yield sockpuppeting.

Anonymous identity can create a culture of open debate where identity politics are less prominent, but can let some people play their own devil’s advocate and can launch completely unaccountable attacks.

Cryptographic identities are interesting in that there is no central authority, and they often cannot be revoked (there’s no way to ban an identity systemically without cooperation). Cryptographic names are often not human-memorable, thanks to the constraints of Zooko’s Triangle. It’s possible to work around this, but the systems for doing so are cumbersome in their own right.

Invites are often used to make sure that the social group grows from a known seed; because social networks are often strictly divided by race and gender, this often serves to make the group homogenous over certain traits, despite not having selected for those traits specifically. Invites can also rate-limit the growth of any one group and, given enough seeding of minority or otherwise oppressed groups, let a more diverse pattern form, if the seed is chosen carefully.

Invite trees are a pattern where each user can invite some number of other users but is in some way ‘responsible’ for their behavior, which limits the possibility that invites are sold openly, and can in some cases keep out certain surveilling users.

I’m sure there are a great number of patterns I’ve missed, but cataloguing these and calling out the differences may help make us more aware of the tools we have at our disposal in creating social networks.

Why is it so hard to evolve a programming language?

Parsers.

We use weak parsing algorithms – often hand-written, left-leaning recursive descent parsers, sometimes PEGs – usually with a lexing layer that treats keywords specially, annotating them as a particular part of speech as a function not of the grammar but of the words themselves.

This makes writing a parser easy, particularly for those hand-written parsers. Keywords are also a major reason we can’t evolve languages: adding new words breaks old programs that were already using them.

The alternative is to push identification of keywords into the grammar and out of the lexer. This means the part of speech for a word can be determined by where it’s used. It allows some weird-looking programs, but it keeps old programs working. Imagine javascript letting you have var var =. It’s not ambiguous, since a keyword can’t appear as a variable name in that position. Whether the first var is a keyword or a variable name can’t be known without some lookahead, though: var = would make it a variable name and var foo would make it a keyword.

This usually means using better parsers. Hand-written parsers can maintain a couple of tokens of buffered state, allowing an unshift or two to put tokens back when a phrase doesn’t match; generated parsers can do better and use GLR; and a fully dynamic parser working off the grammar as a data structure can use Earley’s algorithm.
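As a toy sketch of that buffered-lookahead idea, here’s how a hand-written parser might decide, from position alone, whether the var in the var var = example above is a keyword or a variable name. The token shapes and grammar are invented for illustration, not real JavaScript:

```js
// Toy tokens: an array of strings. Grammar and token shapes are made up here.
function parseStatement(tokens) {
  const [first, second] = tokens; // a couple of tokens of lookahead
  if (first === 'var' && second !== undefined && second !== '=') {
    // `var foo ...` -- positionally, `var` is acting as a keyword
    tokens.shift();
    const name = tokens.shift();
    return { type: 'Declaration', name };
  }
  // `var = ...` (or `foo = ...`) -- here the word is just a variable name
  const name = tokens.shift();
  tokens.shift();                 // consume '='
  const value = tokens.shift();
  return { type: 'Assignment', name, value };
}

// parseStatement(['var', 'x'])      -> { type: 'Declaration', name: 'x' }
// parseStatement(['var', '=', '5']) -> { type: 'Assignment', name: 'var', value: '5' }
```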

These are problematic for PEGs, though. A PEG won’t backtrack and figure out which interpretation is correct: once it has chosen a part of speech for a word, it sticks. That’s the rationale behind its ordered choice operator – one alternative must have clear precedence – which is in essence an implicit way to mark which part of speech something is in a grammar.

Backward-incompatible changes

It’s always tempting to get a ‘clean break’ on a language; misfeatures build up as we evolve it. This is the biggest disservice we can do our users: a clean break breaks every program they have ever written. It’s a new language, and you’re starting fresh.

Ways forward

Pragmas. "use strict" being the one Javascript has. They’re ugly, they don’t scale that well, so they have to be kept to a minimum. Version selection form mutually exclusive pragmas. This is what Netscape and Mozilla did to opt in to new features: <script language='javascript1.8'>. The downside here is that versioning is coarse, and doesn’t let you mix and match features. Scoping "use strict" to the function in ES5 was smart, in that it allows us to use the lexical scope as a place where the language changes too.

The complexity with "use strict" is that it changes things more than lexically: Functions declared in strict mode behave differently, and if you’re clever, you can observe this from the outside, as a caller, and that’s a problem for backward compatibility.
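A small illustration of that: the value of this inside a function depends on whether the function itself was declared strict, and a caller can observe the difference.

```js
function sloppyThis() { return this; }                 // non-strict: `this` defaults to the global object
function strictThis() { 'use strict'; return this; }   // strict: `this` stays undefined

console.log(sloppyThis() === undefined); // false -- the caller sees the global object
console.log(strictThis() === undefined); // true  -- observably different from the outside
```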

Support multiple sub-languages. With a parser that can combine grammars (Earley’s algorithm and combinator parsers for pure LL languages are particularly good at this, though PEGs are not), someone can elect a different language within a region of the program. Language features can be left as orthogonal layers. How one would express that intent is unexplored, though; too few people use the tools that would allow this.

Versions may really be the best path forward. Modular software can be composed out of multiple files, and with javascript in the browser in particular, we’ll have to devise other methods; transport of unparsed script is already complex.

We should separate the parser from the semantics of a language: let there be one, two, even ten versions of the syntax available, and push down to a more easily versioned (or unversioned) semantic layer. This is where Python fell down without needing to: the old cruft could have been maintained and reformed in terms of the new concepts from Python 3.

Confusion in Budapest

Today, in ‘European politics are inscrutable unless you’re there’, I discovered that while Hungary is a European Union member, it doesn’t use the Euro; it’s still on the Forint, with a floating exchange rate. There’s resistance to fully adopting the currency here.

I changed money to Euros in Newark since I had time to kill, but that left me in Budapest without money that small vendors will accept. Doubly so the bus, which requires exact fare.

I ended up being waved onto the bus, however much a mistake that was, because it was so full. Disaffected drivers are a particular frustration on bus routes for me.

I ended up at the exchange point to get on the Metro (K-P P+R), but without a valid ticket to exchange. I ended up having to wander around to figure out how to get away from the Metro station; it turns out it’s attached to a shopping mall. I withdrew 15.000 Ft., considered trying to get some food at Tesco, and decided against it and went to the train.

Tickets are confusing … I misvalidated my first one, destroying it. After some I-don’t-speak-Hungarian-you-don’t-speak-English with the ticket attendant, she showed me how to validate a ticket and I finally got on the Metro.

Got off downtown and realized I was at the wrong Hilton; of course there are two. I’m at the less convenient but much more beautiful one on the hill across the Danube. At least taxis are affordable here.

I didn’t notice when I booked this hotel that it’s at the top of a small mountain. It’s not a long walk to the conference, but it’s a steep one.

On my way to Budapest

0100 AT

Airports are already whitewashed by cost, but international travel even more so: probably 90% of the people I saw in Newark’s Terminal B were white, passengers and workers alike. It’s disturbing to see such a strong filter, and I wonder what pressures are selecting this way. Is it hiring for a second language in a wing where most flights are to Europe, and so selecting for European languages? Or is there some more insidious bias?

0300 UTC

I’m flying over the Atlantic right now, three hours from Vienna. It’s night, by any stretch of the imagination – early in Vienna, rather late in Boston. I spent all day in airports, mostly waiting in Newark, since my midmorning flight in didn’t really come that near my early-evening flight out.

The trend toward denying passengers checked luggage did the usual damage on this flight: slow boarding, people cramming bags that did not fit into overhead compartments. I wonder if European flights have a different carry-on size; mine was half an inch too big to fit in most of the bins, and so had to be put in sideways, taking more space than necessary. This feels like a classic case of engineers rounding measurements to a convenient number in their native unit and saying ‘close enough’. 33 cm totally equals 12 inches, right? Close enough.

I’m sitting with two delightful women, on their way to Iran; at least one’s a journalist, and I’ve not asked for more detail from the other. They’re kind and fun, and shared chocolate with me. I hope they make their connection – they’ve ten minutes between, and a whole day’s wait if they miss it. We bonded over the difficulty of loading bags in the bins; theirs misfit the same way mine did, and even my height and ability to use brute force to close things didn’t do the job.

I can’t sleep, since the flight’s a little bumpy and the staff keep nudging my elbow in the narrow aisles. A 777-200 is a lot nicer than most planes I’ve been on recently, but it’s still arranged more for sheer numbers than for comfort.

I watched two films.

Since Bailey couldn’t come with me, I got to see ‘Guardians of the Galaxy’, which was fun, but I’m starting to get frustrated with the arbitrary plots of so many movies. I feel like in the past, directors at least tried to satisfy those of us whose sense of disbelief, while suspended, still works. Lately they do not. Every device serves the plot, not internal consistency, and every failure is arbitrary to the same end. It’s the kind of lazy storytelling that leads to killing characters for emotional impact, rather than driving situations where tough choices have to be made. The first bad guy was black, and the green woman had echoes of Star Trek’s Orion slave girls. The criticisms of her character development are true: she’s almost all prop to the men in the film, even if she does kick ass initially. There’s even the clichéd rescue scene toward the end.

Second came ‘Lucy’. It’s as if ‘Kill Bill’ merged with ‘2001: A Space Odyssey’ and ‘I, Robot’. Bad science, but at least somewhat internally consistent. Bizarrely philosophical even while there’s wanton killing on screen. I can add it to my long, long list of movies that make me say ‘huh’ at the end. I do wish there were a convention for ‘humans have potential’ other than ‘humans only use 10% of their brain’. It’s so trite at this point that it makes me angry.

0500 UTC

Now we’re over the English Channel, heading across France. My brain is trying to comprehend the path we’re taking, given the Mercator projection of the map they display it on and the 11,000 m altitude. One part of me wants to round that down to ‘well, just barely off the surface’, and the rest thinks it’s unfathomably high, backed up by the -50 °C outside the aircraft.

At that altitude, pressure should be low enough that sustaining large mammal life is almost impossible without hibernation level metabolic change. The temperature, too, would kill within minutes. I wonder what the air pressure is inside the cabin. I’ve never found a good physical indicator using my body, but my ears have popped continuously for the last six hours.

0600 CET

I can barely see the sunrise, I think over the Rhein plain, from my seat since I’ve an aisle. It’s pretty, a dull orange and deep blue, separated by an even deeper blue layer of clouds, slowly lightening.

I’m not sure whether jetlag will hit me or not – I’ve been up for 20 hours or so, but feel like it’s morning. I hope that this bodes well. We’ll see if I make it to tonight. I think there’s a speaker’s dinner, or some other gathering leading up to the conference tomorrow. I should check, but there’s no internet connection in-flight, and we’ll see what happens on that front when I get to Budapest. Maybe I can nap on the last flight and arrive truly refreshed. We’ll see if I get a window to lean against. Chances are aisle or middle though. If a Dash 8 has a middle.

0730 CET

I’m most worried now about whether I can get a SIM card and enough data service to be useful while I’m at the conference. I suspect it’ll be fine, but likely a little annoying.

Nodevember 2014 - Sunday

@bejonbee talking about React.

He works for an interesting group of people – not the usual consultancy, but a wealth-management and self-sufficiency group, doing education. Interesting premise.

Mostly a 101 to react, but nice to see someone more familiar tie some things together.

The implications of the DOM-diffing behavior are really interesting, in that modifications made outside of React are preserved, not overwritten – React really does play nice with others.

JSX is interestingly implemented; solid work by people who really understand parsers, but they’re somewhat simplistic about the lexing, so that class is a reserved word, meaning HTML class= had to be renamed className=.

@funkatron is giving a talk on “Open Sourcing Mental Illness”.

His talk’s been recorded 14 times(!) and he has copies up at funkatron.com.

Comparing eye disease – needing corrective lenses – to mental illness. Awesome! “How many people here have just been told you need to ‘squint harder’” .. no hands.

“how many of you would feel comfortable talking about a coworker you knew pretty well about having cancer?” Most hands.

“How many would feel comfortable with talking about your mental health?” Maybe 1 in 5.

Moderate depression has a similar level of disability to MS or severe asthma.

Severe depression has a similar level of disability to quadriplegia.

“You are so brave and quiet; I forgot that you were suffering” – Ernest Hemingway

Watching @derickbailey‘s talk on development environments and getting out of IDEs, looking for advice to give to developers at PayPal.

I just realized that Grunt looks a lot more amazing if you’re coming from a heavy IDE with lots of features but no flexibility. It’s amazing what perspective looks like!

And now to @hunterloftis “We are all game developers”

He built a game for a major music artist in the UK in three weeks, using software rendering. Great art and integrating the music.

Now 1.7 billion WebGL devices were shipped last year. It’s available on iOS 8!

“We avoided a lot of work by avoiding state” – since most rendering is built with pure functions from a tiny, immutable state, lots of things like animation speed came out naturally. Then add websockets and the state from one user’s view controls a renderer on the server. Super clever.

requestAnimationFrame can drop frames, so elapsed time has to be calculated, and perhaps multiple simulation ticks run (to assign position and state), to keep time constant and not dependent on the computer’s speed. He points out that this affects testability, and that rendering and simulation have to be decoupled.

Simulate faster than rendering: otherwise, tearing and sync problems.
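A sketch of that fixed-timestep decoupling, with hypothetical simulate() and render() functions:

```js
const STEP = 1000 / 120; // simulate at 120 Hz -- faster than the ~60 Hz render
let last = performance.now();
let accumulator = 0;

function frame(now) {
  accumulator += now - last;
  last = now;

  // Run as many fixed-size simulation ticks as the elapsed time requires,
  // even if requestAnimationFrame dropped frames in between.
  while (accumulator >= STEP) {
    simulate(STEP); // deterministic: same inputs always produce the same state
    accumulator -= STEP;
  }

  render(); // draw whatever the latest simulated state is
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```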

He made the audience laugh with an awesome bug: trying to simulate rigid-body physics for a simple box, which, in trying to make it behave right, flaps around the screen like a bird, squishing and flopping before it pops into its final shape. The take-away, though, is that if physics is written deterministically and doesn’t depend on the render loop, the bug is repeatable – and it’s possible to know the bug was fixed, since the simulation is deterministic.

And techniques for controlling state and using immutable objects apply greatly to DOM rendering and apps too. React uses determinism to great effect.

talks I missed

I’m bummed that I’m missing @thlorenz‘ talk on heap dumps for mere mortals, but I’m making a note to have good conversations with him after the fact. (He’s already got his slides up!)

I heard that @cdlorenz’ “Runtime Funtimes”

@nicholaswyoung‘s (Original Machine) talk on microservices.

“We learn more from things going wrong than things going right”

Divorced the CMS domain from the podcast feed domain, and separated out the web player software.

“When we release a new show, we get 10,000 requests to download the show in the first few seconds. Ruby wasn’t the utopia we thought it was.”

“I build a monolith because that’s what Rails conditions you to do. The admin dashboard, the analytics, the feed generation all in one app. You put your controllers in your controllers directory and your views in the views directory, and you end up with a monolith.”

“Our media services would hold requests and it would take users 4 seconds to load our application”.

“I didn’t initially know how to make node work. First thing I did was write a monolith. I thought the runtime would save me. I’m smart.”

Seeing the core broken up into such isolated modules, you get the idea that breaking things up is a good idea.

“It’s an application. I guess you could call that a service.”

Nodevember 2014 - Saturday

Nodevember 2014 kicked off this morning in Nashville after a super fun Nodebots meetup last night.

@elizabrock’s keynote was a fantastic review of where we came from and who we are as an industry: not all computer scientists, but doing computer science; not mathematicians, but often doing mathematics; not all engineers, but doing engineering; not all artists, but doing art.

We’re training the fourth generation of programmers now: The women who programmed the early computers during World War II could be our children’s great grandparents.

I missed @jeffbski’s talk on hardening Node.JS for the Enterprise, but the slides look great and I heard great stuff. Also fighter jets, befitting a talk from someone who was USAF!

Good talk from @ifandelse on ES6. The future is now, for sure. Coming up fast.

@mrspeaker gave a great, fun, funny, reference-filled talk on Gonzo Game Development. Lots of great quotes, and talk about the line between engineering and art.

@katiek2 gave a great intro to Nodebots, and what’s needed to run a good meetup. It’s tempting to do one in Boston / Somerville. Totally @rwaldron‘s turf, and would probably be awesome.

My own talk went well, though my voice gave out part way through and I ran fast. I wasn’t planning on questions, but I guess if you engage your audience, they’ll ask ‘em anyway.

A good intro to couchdb from @commadelimited

And now, “Make art, not apps” by @thisisjohnbrown – simple algorithms! Relating touch to display. Looking up file formats so you can aim glitchiness at interesting places when you corrupt data. Simple trials of “let’s see where the code takes us” become best-of-show art pieces.

He made a “plinko” board (like pachinko), and wired it up to a projector and board, and used it to trigger particle animations. The demo gets “Aaaaahs” from the crowd. Super simple effect but totally wow.

He showed off Iannis Xenakis’ music from the 1950s generated from gas molecule interactions, and Frieder Nake’s art created with Markov chains. People have been doing wonderful art with code and algorithms for a long time!

And ending with a demo of the Neurosky device and Neuronal-Synchorony library together reading brainwaves and generating output, both audio and visual – imagine that being combined into a multi-person dance party!

Homework: Do your own art. Lots of options!

  • Uncontext: Structured data source without the context or rules for how they’re generated, but a source of data to do art with.
  • p5.js

Walter Benjamin’s 1936 essay on mechanical reproduction and what it has done to art: change ‘mechanical’ to ‘digital’ and you have a manifesto for creative coding.

Time for a party!

Not a moment too soon

It wasn’t a moment too soon – and in fact was a few moments too late – that I moved my site from Wordpress to Hexo. The other two dozen Wordpress sites on the server – not just my friends’ blogs – running versions from 3.8 to 4.1, were broken into, and scripts were planted that would send mail. Some interesting features of the hack, though:

  • They installed PHP with innocuous-sounding files like gallery.php inside of plugins and themes for Wordpress.
  • They installed a .so file, loaded it into the /usr/bin/host program with a dynamic loader trick, then deleted the .so so it’d be hard to find. This created a daemon used to send junk mail, and quite efficiently too.
  • Having PHP record what URL was posted to when sending mail is the best thing ever for tracking this down.
  • lsof is great for verifying that things are shut down.
  • They wrote to every directory that they had privilege to that was web accessible. Very adept hack.

Ugh.

Blog Migration

I just moved my blog over from a Wordpress installation to Hexo, in a fit of frustration after five friends’ blogs were broken into and used to spam via a hole, apparently, in Wordpress’ Jetpack. Leaving complex software unpatched for more than a day is becoming impossibly dangerous, and I don’t use most of the features of Wordpress anyway: I’ve increasingly had an allergy to comments and most other dynamic features, and I author in Markdown anyway. Being able to do this tidily in vim makes me happier than editing in a web browser.

I chose Hexo because it had a working migrator to import a dump from Wordpress; no other reason, really, but its design works well enough (even if it is slow to generate the static files given my nearly 1500 posts). URLs were preserved with little hackery, too, so I didn’t break the web in the process.

I still want something better: I’d be happy without pagination to avoid rebuilding a 1500-entry latest-first archive every time I add a post; style files don’t seem to get updated properly (that is probably a more trivial bug that I could fix), and something that’s more directly in tune with the dependencies between the source files and the generated pages would be delightful. Maybe I need to make something with Broccoli or even just make(1) or tup.

Telcopunk

So we’ve had steampunk and dieselpunk, cyberpunk and seapunk.

Me, I’m going to call my aesthetic ‘Telcopunk’.

I favor practicality.

I believe in universal service and universal access.

Utilitarianism rules.

Research is important.

Unions are good.

Work locally. Think globally.

Distance is expensive.

Connecting people is important.

Information is and should be a primary concern of industry.

Designs should be made for durability.

An important job is building and maintaining infrastructure.

Privacy – but not security – is a core value, and standards of conduct reflect this.

Jeans. Work boots. Gloves.

Conceive things, then make them.

'How do I get good at programming?', I'm asked.

Read. Write. Publish. Repeat.

And in general, people’s opinions are meaningless without data to back them up. So ignore the haters.

Ignore the people saying you’re doing it wrong unless your job depends on it or they have good reasons.

People will tell you “javascript will die” or “ruby is just a fad”

Ignore the haters.

But also ignore the haters who say “java is stupid.”

And ignore the haters who say “OO is wrong”

And ignore the ones who say “OO is the only way” Or “OO is the best way” too.

But listen to the people who say “have you considered a different approach?”. Those are the good ones.

Strong suggestions for structurally combatting online harassment

Craig Newmark asked for suggestions, and here are some things I came up with:

  • Create block functions that actually work and completely block all interaction with a user.
  • Create a mute function that doesn’t get tangled in block.
  • Respond to abuse reports, generating at minimum an inter-user block; when they involve any kind of escalation by the abuser, a block of that user from the service (or other strongly quarantining action).
  • Encourage use of pseudonyms rather than complete anonymity, if only to encourage a stable handle to block by.
  • Spam-fighting-like statistical models to detect outlier behavior – repeated first contacts by someone who’s been reported as harassing is one particularly significant sign. Being proactive and confirming with the harassed user might even make sense. “Is @username bothering you?”
  • Allow communities to segment when possible, rather than encouraging all users to share one single graph.
  • At least three-level privacy controls per account: Public, initial contacts restricted to friends, and all contact restricted to friends.
  • Create transparent policies and processes, so we can know how effective the service will be in supporting us if harassed, rather than shouting into the void, wondering if anyone actually reads these reports. If the policies or processes change, say something!
  • Do use decoy selections in report-abuse forms, but keep it simple: “this is annoying” vs “this is dangerous” can be differentiated, and the decisions about how to handle those should be different.
  • Don’t patronize the people you’re trying to protect. Leave choices in the hands of those needing protection when it’s possible. For tools for protection that have downsides (social cost, monetary cost, opportunity cost), let those needing protection opt in or opt out. If the tools are independent of each other, let them be chosen à la carte.

And a rule of thumb:

If you spend less time fighting harassment than you do fighting spam, your priorities are wrong. If you take spam seriously and don’t take harassment seriously, you’re making it worse.