Tag Archives: Web Development

Webdev Tip: ALWAYS Put the Record ID *in* the Edit Form

I started out this morning happy with my web host. They’d sent me an alert about disk usage that allowed me to catch an error that would have filled up all the available space on my VPS and taken down this site and several others, and I was able to fix it before that happened. That changed when I discovered what had actually caused the problem, as revealed by another, more pressing issue.

Background

A few months ago, my VPS did lock up, because I’d set up backups on a new site and forgot to add a cleanup script. Tech support brought the server back online, I cleared out the backups, and I copied the script over from this site.

But this site’s cleanup script stopped running, and the disk reached 90% usage. The script was still there, but the cron job was somehow pointing at the other site’s script. I figured I must have messed it up at the time, made sure it was correct now, and moved on.

Discovery

Later this afternoon, though, I discovered that a test blog I set up last week was pointing to the wrong site. That seemed really weird. I looked in the control panel and it was very neatly pointing to the other site’s folder. How does that happen?

Then I remembered: A few days ago, I’d reconfigured several sites to upgrade them to PHP 7.2. I’d opened them in multiple tabs so I could get them all at once. And I had this sinking feeling.

Sure enough: DreamHost’s control panel doesn’t put the form state in the page. As far as I can tell, the ID of the record you’re editing is stored in the session somewhere, which is fine if you only ever have one page open at a time, but if you open two pages, it gets confused.

That’s what probably happened a few days ago: I opened two forms, saved them both, and the settings for one site got written to the other. And it’s probably what happened a few months back with the cron jobs: I opened one to edit, the other for reference, and it overwrote the wrong one.

As near as I can tell it’s just the one site that got messed up, which is a relief. Even better that it’s the test site and not, say, this one. But I’m still waiting for the fixed config change to take effect.

Lesson

Always, always put the record ID for an edit form in the form. People will open multiple records in different tabs or windows, to compare them or just to speed up their workflow.

If you store it in session, or in a cookie, or anywhere else, you run a good chance of saving the data into the wrong record.
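
Here’s a minimal sketch of the fix, in PHP since that’s what this site runs on (the form fields, table, and function names are made up for illustration): embed the record ID in a hidden field when you render the form, and read it back from the submitted data, never from the session, when you save.

<?php
// Minimal sketch; field, table, and function names are hypothetical.
// Render the edit form with the record ID embedded in a hidden field.
function render_edit_form(array $record): string
{
    $id    = htmlspecialchars($record['id'], ENT_QUOTES);
    $title = htmlspecialchars($record['title'], ENT_QUOTES);

    return <<<HTML
<form method="post" action="/records/save">
    <input type="hidden" name="record_id" value="{$id}">
    <input type="text" name="title" value="{$title}">
    <button type="submit">Save</button>
</form>
HTML;
}

// On save, trust the ID that came back with the form itself, not whatever
// happens to be sitting in the session from some other open tab.
function handle_save(PDO $db): void
{
    $id = (int) ($_POST['record_id'] ?? 0);

    $stmt = $db->prepare('UPDATE records SET title = :title WHERE id = :id');
    $stmt->execute([
        ':title' => $_POST['title'] ?? '',
        ':id'    => $id,
    ]);
}

You should still check that the logged-in user is allowed to edit that particular record, of course, but at least the ID can’t silently drift between tabs.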

Redirecting HTTPS with Let’s Encrypt and Apache

The free TLS certificate provider Let’s Encrypt automates the request-and-setup process using the ACME protocol to verify domain ownership. Software on your server creates a file in a known location, based on your request. The certificate authority checks that location, and if it finds a match to your request, it will grant the certificate. (You can also validate it using a DNS record, but not all implementations provide that. DreamHost, for instance, only uses the file-on-your-server method.)
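
To make that concrete, here’s a rough sketch in PHP of what the file-based (HTTP-01) validation amounts to. The token, thumbprint, and web root below are invented, and in practice an ACME client or your host’s panel does all of this for you:

<?php
// Rough sketch only; the token, thumbprint, and web root are invented.
// Real ACME clients handle this automatically.
$webroot = '/home/example/example.com';
$token   = 'evaGxfADs6pSRb2LAv9IZ';                    // issued by the CA for this request
$keyAuth = $token . '.' . 'account-key-thumbprint';    // token plus account key fingerprint

// Write the expected response where the CA said it would look.
$dir = $webroot . '/.well-known/acme-challenge';
if (!is_dir($dir)) {
    mkdir($dir, 0755, true);
}
file_put_contents($dir . '/' . $token, $keyAuth);

// The CA then fetches
//   http://example.com/.well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZ
// over plain HTTP, and grants the certificate only if the contents match.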

That makes it really simple for a site that you want to run over HTTPS.

Redirected sites are trickier. If you redirect all traffic from Site A to Site B, the validation request for A gets bounced to B, where A’s challenge file doesn’t exist, so Let’s Encrypt won’t issue (or renew!) the cert. You need to make an exception for that path.

On the Let’s Encrypt forums, jmorahan suggests this for Apache:


RedirectMatch 301 ^(?!/\.well-known/acme-challenge/).* https://example.com$0

That didn’t quite work for me since I wanted a bit more customization. So I used mod_rewrite instead. My rules are a little more complicated (see below), but the relevant part boils down to this:


RewriteEngine On
RewriteBase /

# Redirect all hits except for Let's Encrypt's ACME Challenge verification to example.com
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge
RewriteRule ^(.*) https://example.com/$1 [R=301,L]

These rules can go in your server config file if you run your own server, or the .htaccess for the domain if you don’t.


Adding the S in HTTPS

I finally moved the public side of this blog over to HTTPS last weekend. Traditionally I’ve preferred to put public info on HTTP and save HTTPS for things that need it: passwords, payment info, login tokens, anything that should be kept private. But between the movement to protect more and more of the web from eavesdropping and the fact that tools are making it harder to split content between open and encrypted sides (the WordPress app sometimes gets confused when you run the admin over HTTPS but keep the public blog on HTTP), I decided it was time.

The last sticking point was putting HTTPS on my CDN, and I’d decided to try getting Let’s Encrypt and CloudFront working together over the weekend. Then Amazon announced their Certificate Manager for AWS, which took care of the hard part. All I had to do was request and approve the (domain-validated) certificate, then attach it. Done!

Downside: Because I opted for the SNI option on the CDN, rather than pay the premium to get unique IP addresses on every CloudFront endpoint, the images won’t work with older browsers like IE6. (Server Name Indication is a way to put more than one HTTPS site on the same IP address.)

On the other hand, the cert I have on the site itself is SHA-2-signed (as it should be, now that SHA-1 is no longer sufficient), so it wouldn’t work with older browsers even if I turned off the CDN and kept the images on the server.

It’s the first time I’ve actually broken the ability of older browsers to see any of my personal sites. I’ve broken layouts, sure, but not completely cut them off. In general I’d rather not, but I think I’m OK with it this time because

  1. SHA-1 really does have to go, SHA-2 is well-established, and it’s not like I’m providing downloads of modern browsers or a critical communications forum for people who are stuck with ancient hardware/software because that’s all that’s available to them.
  2. SNI has been around for TEN YEARS.

And as it turns out, DreamHost’s ModSecurity rules block IE6 to begin with, so the whole site’s already broken in that browser.

So I guess next time I redesign I can finally drop any IE6 workarounds. :shrug:

A lot of web developers have forgotten the lessons of IE6

A lot of web developers have forgotten the lessons of IE6, and just as they used to build desktop websites coded only for one engine, now they’re coding mobile sites specifically for WebKit, even when other browsers would be perfectly capable of rendering the designs they want.

This is exactly the sort of thing that gave IE6 such a stranglehold on the web for so many years (and as much as we’d like it to be, it’s not dead yet), with Netscape/Mozilla and Opera completely marginalized until Firefox managed to break through. It’s not quite so bad because two companies are driving WebKit (Apple & Google) rather than just one (Microsoft), but let’s try to learn from history this time around instead of repeating it.

Call for action on Vendor Prefixes – The Web Standards Project

The Web Standards Project is a grassroots coalition fighting for standards which ensure simple, affordable access to web technologies for all.

Originally posted on Google+

On broken HTML

From time to time the idea is put forth that Opera (and Firefox, for that matter) needs to start dealing with bad code. There are two problems with that view:

  1. Opera already deals with quite a bit of “bad code” (but there’s always room for improvement)
  2. Just dealing with bad code isn’t enough: you have to deal with it the same way someone else does.

#2 is the tough part.

The rules for dealing with good code are, for the most part, specific. If you encounter well-formed HTML, you can be reasonably sure you know what the author meant. But there are very few rules for dealing with bad code. Trying to “deal with it” means trying to guess what the author meant, and sometimes different assumptions are equally likely.

Example:


<p><b>Here's some text</i> and here's some more.</p>

Did the author close the non-existent italics by mistake, meaning to close the bold? Or did he open bold by mistake, intending to open italics? Or is the closing italics tag left over from copy-and-paste? Depending on which assumption the browser makes, it could display the line as any of these:

Here’s some text and here’s some more. (first half bold, the rest plain)

Here’s some text and here’s some more. (first half italic, the rest plain)

Here’s some text and here’s some more. (the whole sentence bold)

And that’s just a simple example. It gets wilder when you throw in issues like inline vs. block elements. A paragraph should never appear inside a tag for text formatting, like bold or italic. By all rights, starting a new paragraph (or more precisely, ending the previous one) should also revert to plain formatting. But a lot of old pages expect the formatting to continue into the next paragraph (think <b>First paragraph<p>Second paragraph, where the author expects both paragraphs to come out bold), because way back when, a P tag was a double-line break, not a container.

Now, suppose that Browser A always makes the first assumption, and Browser B always makes the second. If someone tests their code in Browser A, and it happens to do what they want, they won’t necessarily notice that their code is broken. The result: the site looks wrong in Browser B, and the page author — who thinks the page is fine, since he tested it in Browser A — blames Browser B.

Multiply that scenario by millions of pages and you have a large chunk of the web as we know it today.

So the solution isn’t just to “handle bad code.” It’s to handle that bad code in the same way that the dominant browser handles it. And since there’s no document you can look to for guidance, that means taking every possible chunk of bad code, running it through the other browser, and seeing what it does to it.

And there are a lot of ways to break code!

Even Microsoft did this back when IE was new. At the time, lots of people were writing broken code and testing it by seeing whether it looked right in Netscape. So IE had to make the same assumptions Netscape did on certain things. Once IE became established, they diverged.


This post originally appeared on Confessions of a Web Developer, my blog at the My Opera community.