Since I’m a SysAdmin, people ask me how much they need to worry about the Heartbleed vulnerability. Here’s my own take:
Google was known to be vulnerable. They co-discovered the vulnerability and deployed fixes quickly. I like to believe they are analyzing the scope and likelihood of user password compromise and will issue good advice on whether Gmail passwords should be updated.
For everything else, my small opinion is “don’t panic.” Not every web site would have been affected. The Ops folks at each site need to patch their systems and assess the extent to which credentials may have been compromised, then take appropriate steps to mitigate any compromise, which might include asking users to set new passwords. But if a site is still waiting on some patches, then submitting a new password could actually put both passwords at risk.
For other important passwords, like your bank, check what they recommend you do. If a site is important to you and it offers two-factor auth, go for it: that typically means that if you log on from a new computer they’ll text a one-time PIN code to your mobile phone to double-check that it’s you.
I want to launch a service which has its own complex start/stop script at boot, and I want to launch it as a non-login user. So, I dig into upstart. The cookbook … is not a cookbook. So, here’s my little recipe:
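A minimal sketch of the idea, assuming a service account named “openfire” (adjust the user and paths to taste), goes in /etc/init/openfire.conf:

# /etc/init/openfire.conf
description "Openfire (wrapper around the vendor start/stop script)"

start on runlevel [2345]
stop on runlevel [016]

# No main daemon for upstart to track; just run the vendor script at the
# right moments, as the non-login service account.
pre-start exec su -s /bin/sh -c '/opt/openfire/bin/openfire start' openfire
post-stop exec su -s /bin/sh -c '/opt/openfire/bin/openfire stop' openfire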
All this does is run /opt/openfire/bin/openfire start or /opt/openfire/bin/openfire stop at the appropriate time. Allegedly, this is suboptimal, but it works for me.
Here is a new phishing attack that made it through to Gmail about the domain name dispute around tjldme . . . ?!!
Dear Manager,
(If you are not the person who is in charge of this, please forward this to your CEO,Thanks)
We are a organization specializing in network consulting and registration in China. Here we have something to confirm with you. We just received an application sent from “Global Importing Co., Ltd” on 20/11/2013, requesting for applying the “tjldme” as the Internet Brand and the following domain names for their business running in China region:
Though our preliminary review and verification, we found that this name is currently being used by your company and is applied as your domain name. In order to avoid any potential risks in terms of domain name dispute and impact on your market businesses in China and Asia in future, we need to confirm with you whether “Global Importing Co., Ltd” is your own subsidiary or partner, whether the registration of the listed domains would bring any impact on you. If no impact on you, we will go on with the registration at once. If you have no relationship with “Global Importing Co., Ltd” and the registration would bring some impact on you, Please contact us immediately within 10 working days, otherwise, you will be deemed as waived by default. We will unconditionally finish the registration for “Global Importing Co., Ltd”
Please contact us in time in order that we can handle this issue better.
Best Regards,
Wesley Hu
Auditing Department.
Registration Department Manager
4/F,No.9 XingHui West Street,
JinNiu ChenDu, China
Office: +86 2887662861
Fax: +86 2887783286
Web: http://www.cnnetpro.com
Please consider the environment before you print this e-mail.
I assume they’ll need a processing fee. I wonder if they munged toldme.com in an effort to avoid Phish filtering . . . ? The URL at the bottom is blocked by our firewall.
At long last, I retired my old T-Mobile G2. It was the last in a long line of phones I have owned over the past decade with a physical keyboard. (I think I owned every Sidekick up to the 3 before going Android with the G1 and the G2.) I like the ability to thumb type into my phone, but the G2’s old keyboard had long ago gone creaky, and it lacked a dedicated number row besides.
Obligatory picture recently taken with my new computer telephone. Featuring a cat.
They don’t make nice smart phones with keyboards any more. Market research seems to indicate that the only remaining markets for keyboard phones are horny teenagers who need a cheap, hip Android-based Sidekick, and those legions of high powered business people who will never abandon their ancient Blackberries.
Anyway, the new Nexus 5 is here. The on-screen keyboard is okay: slow and inaccurate. Like moving from a really fantastic sports car to a hovercraft piloted by a drunken monkey. I mean, the monkey-piloted hovercraft is undeniably cool technology, and I can eventually get where I need to go, but . . . it’s not the same, you see?
So, let’s explore voice dictation! It works . . . well, about as well as the monkey hovercraft, but with the added benefit that you don’t have to keep jiggling your thumb across the screen. But how do you do new lines and paragraphs? Where’s the command reference?
The other thing that excited me about the Nexus 5 was that on the home screen you can drag apps right up to “Uninstall” . . . unless they’re Google apps! “Way to not be evil,” I cried. Then a Google colleague pointed out that it was just a bit of UI funkiness on Google’s part: the applications come bolted into the system image, but there is at least a method to disable them.
Anyway, this is useful knowledge that helped me to vanquish the Picasa sync thing that has been hiding images from the gallery for the past few years. I have another project where I’m testing out BitTorrent Sync to pull images off our phones and then sync a copy of the family photo archive back down to the phones. If that works out, I’ll write it up. I may pursue that further to see if I can’t replace Dropbox, which, unfortunately, does not (yet) offer any sort of a family plan. Also, if I can host my own data I needn’t share as much of it with the NSA.
Two weeks ago, I attended Atlassian Summit 2013 in San Francisco. This is an opportunity to train, network, and absorb propaganda about Atlassian products (JIRA, Greenhopper, Confluence, &c.) and ecosystem partners. I thought I would share a summary of some of the notes I took along the way, for anyone who might find it of interest:
At the Keynote, Atlassian launched some interesting products:
Jira Service Desk
Jira Service Desk is an extension to JIRA 6 oriented around IT needs. The interesting features include:
Customer Portal with integrated KB search
Real-time visibility of ticket SLA status (as time passes, the ticket gets crankier at you in real time about the SLA)
The first thing helps people get their work done, and the second is manager catnip.
Confluence Knowledge Base
Confluence 5.3 features a shake-the-box Knowledge Base setup:
Improved template system — “blueprints” for different article types
Real-time search portal which integrates with JIRA Service Desk
My Questions: enforcing KB link with JIRA workflow and identifying “use count” as an article search metric
Other Stuff I looked into:
REST and Webhooks
There was a presentation on JIRA’s REST API, and mention of Webhooks.
Another feature for tight integration is Webhooks: you can configure JIRA so that certain issue actions trigger a hit to a remote URL. This is generally intended for building apps around JIRA. We might use this to implement Nagios ACKs.
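To make the Nagios idea concrete, the receiving end might look something like this hypothetical sketch: a tiny listener that a JIRA webhook POSTs to when an issue gets acknowledged. The port, the command-file path, and the “host/service” summary convention are all assumptions for illustration, not anything we actually run.

import json
import time
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

NAGIOS_CMD_FILE = '/var/spool/nagios/cmd/nagios.cmd'  # adjust for your install

class JiraWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # JIRA webhooks POST a JSON payload describing the issue and event
        length = int(self.headers.getheader('content-length'))
        payload = json.loads(self.rfile.read(length))
        issue = payload['issue']
        # assume the issue summary starts with "host/service: description"
        host, service = issue['fields']['summary'].split(':')[0].split('/')
        comment = 'Acknowledged via JIRA ' + issue['key']
        cmd = '[%d] ACKNOWLEDGE_SVC_PROBLEM;%s;%s;1;1;1;jira;%s\n' % (
            int(time.time()), host, service, comment)
        with open(NAGIOS_CMD_FILE, 'a') as f:
            f.write(cmd)
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8008), JiraWebhookHandler).serve_forever()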
Atlassian Connect
I haven’t looked too deeply, as this is a JIRA 6 feature, but Atlassian Connect promises to be a new method of building JIRA extensions that is lighter-weight than their traditional plugin method. (Plugins want you to set up Eclipse and a Java dev environment on your workstation… Connect sounds like you just build something in your own technology stack around REST and Webhooks.)
Cultivating Content: Designing Wiki Solutions that Scale
Rebecca Glassman, a tech writer at Opower, gave a really engaging talk that addressed a problem that seems commonplace: how to tame the wiki jungle! Her methodology went something like this:
Manage the wiki like it is a product: interview stakeholders, get some metrics, do UX testing
Metrics: Google Analytics, View Tracker Macro, Usage Macro
UX results at Opower revealed more reliance on the Table of Contents versus Search (55%), and that users skip past top-level pages, so you don’t want to put content just there
In search, users only look at the first 2-3 results before giving up
They engaged some users to track the questions they had and their success at getting answers from the wiki
The Docs people (2) built an “answer desk” situation where they took in Questions from across the company, and tracked their progress writing answers on a Kanban board
As they better learned user needs and what sort of knowledge there was, they built “The BOOK” (Body of Opower Knowledge) based on a National Parks model:
Most of the wiki is a vast wilderness, which you are free to explore
The BOOK is the nice, clean visitors center to help take care of most of your needs and help you prepare for your trek into the wilderness
The BOOK is a handbook, in its own space, with its own look-and-feel, and edits are welcome, but they are vetted by the Docs team via Ad Hoc Workflows
By having tracked Metrics from the get-go, they can quantify the utility of The BOOK …
(I have some more notes on how they built, launched, and promoted The BOOK. The problem they tackled sounds all too familiar, and her approach is what I have always imagined as the sort of way to go.)
Ad Hoc Canvas
The Ad Hoc Canvas plugin for Confluence caught my eye.  At first glance, it is like Trello, or Kanban, where you fill out little cards and drag them around to track things.  But it has options to organize the information in different ways depending on the task at hand: wherever you are using a spreadsheet to track knowledge or work, Ad Hoc Canvas might be a much better solution.  Just look at the videos and you get an idea . . .
The Dark Art of Performance Tuning
Adaptavist gave a presentation on performance analysis of JIRA and Confluence. It was fairly high-level, but the gist is that you want to monitor and trend the state of the JVM: memory, heap, garbage collection, filehandles, database connections, &c. He had some cool graphs of stuff like garbage collection events versus latency that had helped them to analyze issues for clients. One consideration is that each plugin, and each code revision to a plugin, brings a bunch of new code into the pool with its own potential for issues. Ideally, you can set up a load testing environment for your staging system. Short of that, the more system metrics you track the better: you can upgrade plugins one at a time and watch for any effects. As an example, one plugin upgrade went from reserving 30 database connections to reserving 150 database connections, and that messed up performance because the rest of the system would become starved of available database connections. (So, they figured that out and increased that resource.)
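For reference, the kind of JVM visibility he described can be had with stock HotSpot options and jstat; the following is just a generic sketch (the setenv.sh variable and log path assume a typical JIRA install), not anything Adaptavist prescribed:

# In JIRA's bin/setenv.sh (or wherever your JAVA_OPTS are assembled),
# turn on garbage collection logging:
JVM_SUPPORT_RECOMMENDED_ARGS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/jira/gc.log"

# Spot-check heap occupancy and GC activity every 10 seconds:
jstat -gcutil <jira_pid> 10000

# Count open filehandles held by the JIRA process:
lsof -p <jira_pid> | wc -l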
tl;dr: JIRA Performance Tuning is a variation of managing other JVM Applications
Collaboration For Executives
I popped in on this session near the end, but the takeaways for anyone who wants to deliver effective presentations to upper management are:
The presenter’s narrative was driven by an initial need to capture executive buy-in that their JIRA system was critical to business function and needed adequate resourcing.
As part of a project at work, I’ve written some Jython code that builds iCalendar attachments to include meeting invitations for scheduled maintenance sessions. Jython is Python-in-Java, which takes some getting used to, but it is damned handy when you’re working with JIRA. I will share a few anecdotes:
1) For doing date and time calculations, specifically to determine locale offset from UTC, you’re a lot happier calling the Java SimpleDateFormat stuff than you are dealing with Python. Python is a beautiful language, but I burned a lot of time in an earlier version of this code figuring out how to convert between different time objects for manipulation and whatnot. This is not what you would expect from an intuitive, dynamically-typed language, and it is interesting to find that the more obtuse, statically-typed language handles time zones and it just fricking works.
from java.text import SimpleDateFormat
from java.util import TimeZone
from com.atlassian.jira import ComponentManager
from com.atlassian.jira.timezone import TimeZoneManagerImpl

tzm = TimeZoneManagerImpl(ComponentManager.getInstance().getJiraAuthenticationContext(),
                          ComponentManager.getInstance().getUserPreferencesManager(),
                          ComponentManager.getInstance().getApplicationProperties())

# One date format for UTC, one for the assignee's local time zone
df_utc = SimpleDateFormat("EEE yyyy-MM-dd HH:mm ZZZZZ (zzz)")
df_assignee = SimpleDateFormat("EEE yyyy-MM-dd HH:mm ZZZZZ (zzz)")
df_utc.setTimeZone(TimeZone.getTimeZone("UTC"))
df_assignee.setTimeZone(tzm.getTimeZoneforUser(assignee))  # time zone of assignee

# Start time rendered in UTC ...
issue_dict['Start_Time_text'] = df_utc.format(start_time.getTime())
issue_dict['Start_Time_html'] = df_utc.format(start_time.getTime())
# ... and, if the assignee's zone differs, appended again in their local time
if df_utc != df_assignee:
    issue_dict['Start_Time_text'] += "\r\n "
    issue_dict['Start_Time_text'] += df_assignee.format(start_time.getTime())
    issue_dict['Start_Time_html'] += "<br />"
    issue_dict['Start_Time_html'] += df_assignee.format(start_time.getTime())
Since our team is global I set up our announcement emails to render the time in UTC, and, if it is different, in the time zone of the person leading the change. For example:
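It comes out something like this (the date and the Pacific-time assignee here are made up for illustration):

Wed 2013-11-20 22:00 +0000 (UTC)
Wed 2013-11-20 14:00 -0800 (PST)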
We have a team in London. I have not yet tested it but as I understand it, once they leave BST, their timezone is UTC. I am looking forward to seeing if this understanding is correct.
As I understand it, I’m pulling the current time zone of the user, which changes when we enter and leave DST, which means that the local time will be dodgy when we send an announcement before the cutover for a time after the cut-over.
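A quick way to poke at this from a Jython console, assuming the London folks’ JIRA time zone is set to Europe/London (note that the offset you get back depends on the instant you ask about):

from java.util import GregorianCalendar, TimeZone

tz = TimeZone.getTimeZone("Europe/London")
summer = GregorianCalendar(2014, 6, 1).getTimeInMillis()   # 1 July: BST
winter = GregorianCalendar(2014, 0, 1).getTimeInMillis()   # 1 January: GMT
print tz.getOffset(summer)   # 3600000 ms, i.e. UTC+1
print tz.getOffset(winter)   # 0, i.e. UTC+0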
2) I was sending meeting invitations with the host set to the assignee of the maintenance event. This seemed reasonable to me, but when Mac Outlook saw that the host was set, it would not offer to add the event to the host’s calendar. After all, all meeting invitations come from Microsoft Outlook, right?! If I am the host it must already be on my calendar!!
I tried just not setting the host. This worked fine, except now when people would RSVP to the event, the replies would get stuck in their outboxes with an error.
So . . . set the host to a bogus email address? My boss was like “just change the code to send two different invitations,” which sounds easy enough for him, but I know how creaky and fun-to-debug my code is. I came upon a better solution: I set the host address to user+calendar@domain.com. This way, Outlook is naive enough to believe the email address doesn’t match, but all our software which handles mail delivery knows the old ways of address extension . . . I can send one invitation, and have that much less messy code to maintain.
from icalendar import Calendar, Event, UTC, vText, vCalAddress
# [ . . . ]
event = Event()
# [ . . . ]
# THIS trick allows organizer to add event without breaking RSVP
# decline functionality. (Outlook and its users suck.)
organizer_a = assignee.getEmailAddress().split('@')
organizer = vCalAddress('MAILTO:' + organizer_a[0] + '+calendar@' +
                        organizer_a[1])
organizer.params['CN'] = vText(assignee.getDisplayName() + ' (' + assignee.getName() + ')')
event['organizer'] = organizer
You can get an idea of what fun it is to build iCalendar invitations, yes? The thing with the parentheses concatenation on the CN line is to follow our organization’s convention of rendering email addresses as “user@organization.com (Full Name)”.
3) Okay, third anecdote. You see in my first code fragment that I’m building up text objects for HTML and plaintext. I feed them into templates and craft a beautiful mime multipart/alternative with HTML and nicely-formatted plaintext . . . however, if there’s a Calendar invite also attached then Microsoft Exchange blows all that away, mangles the HTML to RTF and back again to HTML, and then renders its own text version of the RTF. My effort to make a pretty text email for the users gets chewed up and spat out, and my HTML gets mangled up, too. (And, yes, I work with SysAdmins so some users actually do look at the plain text . . .) I hate you, Microsoft Exchange!
I’m building out a simple template system for our email notifications, so of course I want to support multipart, text and HTML. But, hey, we have some text fields in JIRA that can take wiki markup, and JIRA will format that on display. So, how do I handle those fields in my text and HTML message attachments?
So, some sample code to render the custom field “Change Summary” into a pair of strings, change_summary_text and change_summary_html, suitable for inclusion into an email message:
from com.atlassian.event.api import EventPublisher
from com.atlassian.jira import ComponentManager
from com.atlassian.jira.component import ComponentAccessor
from com.atlassian.jira.issue import CustomFieldManager
from com.atlassian.jira.issue.fields import CustomField
from com.atlassian.jira.issue.fields.renderer.wiki import AtlassianWikiRenderer
from com.atlassian.jira.util.velocity import VelocityRequestContextFactory
# Get Custom Field
cfm = ComponentManager.getInstance().getCustomFieldManager()
change_summary = issue.getCustomFieldValue(cfm.getCustomFieldObjectByName("Change Summary"))
# Set up Wiki renderer
eventPublisher = ComponentAccessor.getOSGiComponentInstanceOfType(EventPublisher)
velocityRequestContextFactory = ComponentAccessor.getOSGiComponentInstanceOfType(VelocityRequestContextFactory)
wikiRenderer = AtlassianWikiRenderer(eventPublisher, velocityRequestContextFactory)
# Render Custom Field
change_summary_html = wikiRenderer.render(change_summary, None)
change_summary_text = wikiRenderer.renderAsText(change_summary, None)
Some quick notes. I wanted to move my existing *.py files for JIRA to a subdirectory. I had a bit of a time figuring this out, so maybe this will help someone when googling on the issue:
p4 sync
mkdir -p jython/workflow
p4 edit *.py
bash # I use tcsh
for f in *.py; do
    p4 move $f jython/workflow/$f
done
exit # Back to tcsh
p4 submit
We got a feature request that certain JIRA dashboards should reload more frequently than every fifteen minutes. So, I cooked up some JavaScript to hide in the announcement banner:
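The real snippet isn’t reproduced here, but the idea is roughly the following sketch; the dashboard page IDs and the five-minute interval are made-up placeholders:

<script type="text/javascript">
// Force a full page reload every five minutes, but only on the dashboards
// that asked for it.
(function () {
    var fastReload = ['10100', '10200'];   // example dashboard page IDs
    var match = /selectPageId=(\d+)/.exec(window.location.search);
    if (/Dashboard\.jspa/.test(window.location.pathname) &&
            match && fastReload.indexOf(match[1]) !== -1) {
        window.setTimeout(function () {
            window.location.reload();
        }, 5 * 60 * 1000);
    }
}());
</script>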
I like to virtualize my workstation using VMWare Workstation. Lately, my Ubuntu (kubuntu) 12.04 guest would exhibit really annoying behavior whereby it would insert lots of extra letters as I typed, seemingly at random.
How can I contemplate moving everything to the cloud, especially Google’s cloud, if services are going to flicker in and out of existence at the whim of Google’s management? That’s a non-starter. Google has scrapped services in the past, and though I’ve been sympathetic with the people who complained about the cancellation, they’ve been services that haven’t reached critical mass. You can’t say that about Google Reader. And if they’re willing to scrap Google Reader, why not Google Docs?
An excellent point.
I recall the first time I adopted a “cloud” service for my technology. It was Flickr. I had managed my photos with my own scripts for years. Others had installed Gallery, which always struck me as limited and ugly. Flickr was new at the time, and I really liked the aesthetic. But, upload all my photos there? They had just been bought by Yahoo. How long was Yahoo going to support the service? I still keep local archives of my photos, but I have thousands of photos shared on Flickr, and how do I know that all those captions, comments, geotags, annotations, sets and collections, all that data, might not one day go down with the slowly sinking ship that is Yahoo?
What reassured me was the Flickr API. Worst case, I should be able to write a script to pull all that data to a local place somewhere and later reconstruct my online photo archive. If Flickr were going down, someone else would probably write that script better than I could. It is a grim thought, but at least when Flickr dies, there is an exit strategy.
That is one reason why I can sort of trust Google. They’re pretty good about supporting APIs. They’re killing Reader? That’s dumb. But in an instant, Feedly was able to take over my subscriptions from Google for me, and I just had to spend a few minutes learning a somewhat different interface.
It would be nice, though, if, when software is retired, especially cloud software, it could be open sourced and made available for the die-hard users to keep running on their own servers somewhere. Admittedly, cloud services are especially vulnerable to further external dependencies . . .
You would think, though, that it shouldn’t take much effort on Google’s part to announce that a service is retired but will be kept running indefinitely, at least until the vast majority of its users have wandered on to more compelling alternatives. They still keep the Usenet archive around.
And, yes, I rely on Docs/Drive. This Reader-killing fiasco sounds like an advertising ploy for Microsoft: I rely on Docs/Drive, but maybe Excel is a more trustworthy option for the long term . . . ?
If you are naming files on a computer, please use this format: YYYY-MM-DD. The beauty is that if you list files in “alphabetical order” then these dates get listed in chronological order, because as far as a computer is concerned, “0” comes before “1” and so forth. (And a year is more significant than a month, which is more significant than a day of the month . . .)
It is important to have that leading zero! Why? Because we have more than 10 months! Allow me to demonstrate:
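Here are some hypothetical file names, listed in “alphabetical order” without and then with the leading zeros:

0-11:38 djh@noneedto ~$ ls -1 without-leading-zeros/
2013-1-15.jpg
2013-10-01.jpg
2013-2-20.jpg
0-11:38 djh@noneedto ~$ ls -1 with-leading-zeros/
2013-01-15.jpg
2013-02-20.jpg
2013-10-01.jpg

Without the zero padding, October sorts between January and February; with it, the listing is chronological.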
If you are interacting with strftime() then what you want to remember is %F!
0-11:38 djh@noneedto ~$ date +%Y-%m-%d
2013-02-27
0-11:38 djh@noneedto ~$ date +%F
2013-02-27
0-11:38 djh@noneedto ~$ date +%Y%m%d%H%M # I sometimes use this for file timestamps but don't tell Randall Munroe
201302271138
For my photographs, I have a directory hierarchy of %Y/%m-%B (e.g., 2013/02-February).
I have been excited to see what might come of Yahoo! with Marissa Mayer at the helm. I am really glad to see that, after years of stagnation, Flickr has been improving. Free food and smartphones for employees? Sounds swell. But the buzz now is that there shall be no more remote work. The only way to be productive is to come to the office and feel the buzz and bounce ideas off coworkers.
I am happy to point out that, while we don’t get free smartphones or free food, my employer does issue remote employees a hardware VPN device that provides corporate wifi, and a videophone. And we are hiring.
In my experience as a non-management technical professional, there is some virtue both to working from home, and to working at the office. The office presents great opportunities for collaboration: working through ideas and solving problems. Working from home, for some people, provides an excellent space to focus on getting some work done without interruption. You can get more hours of productive work when your commute is shortened to a walk across the dining room, and when there’s no pressure to quit at a certain time to appease the demands of the train schedule or traffic.
For some people, there’s no place like the office . . . some people do better work from home, and some do not. Managers and executives, the bulk of whose work is meeting with others to make collaborative decisions . . . it seems that they may take several meetings from home, and when they get to the office they feel uncomfortable that the busy hum of productive creative energy isn’t located there. I believe that managers who can structure the working and communication practices of their teams to effectively collaborate and track work progress without requiring a physical presence have an advantage over those who cannot.
I live near the office and frequently collaborate with my manager, so most days I make the trip in. Sometimes when I need to focus on a project, or work with a remote time zone, I’ll commute to the home office. I have been with Cisco for over five years, now. I spent one of those years in New York, and my tenure here would have been much shorter without the flexibility to telecommute.
This question came to mind the other day. “DSL modem” sounds dumb, because as any geek over the age of 30 knows, a “modem” is a device which MODulates and DEmodulates a digital signal over an analog network. Thus a “Digital Subscriber Line” has no need for modulating and demodulating.
“The term DSL modem is technically used to describe a modem which connects to a single computer, through a USB port or is installed in a computer PCI slot. The more common DSL router which combines the function of a DSL modem and a home router, is a standalone device which can be connected to multiple computers …”
The usage “DSL Modem” is not erroneous. A DSL modem does indeed perform modulation and demodulation. It uses either Quadrature Amplitude Modulation (QAM) or Phase Shift Keying (PSK) modulation. Multiple modulated subcarriers are then combined into an OFDM stream. The distinction between this type of modem and a traditional one is that the traditional one modulates audio frequency signals whereas the DSL modem is upconverted to an RF band. But they both perform modulation and demodulation. The digital signals are not sent as baseband digital signals.
I do not know what all those words mean, but I read that as “a DSL modem is still a modem. It modulates and demodulates a digital signal into the RF band of a telephone line.”
I made my own contribution to Wikipedia’s Talk page:
The distinction between whether your “DSL modem” connects via USB, ethernet, wireless, or provides NAT, sounds like a spurious distinction to me. I interpret and interchange “DSL modem” and “DSL router” as “the network device that bridges your local computing resources to your network service provider.”
But if I have learned anything about nomenclature disputes on Wikipedia, it is that they are not worth the effort.
The current Google Car can operate on city streets autonomously, but it needs someone doing the backend work of getting all the streets mapped out perfectly, figuring out exactly where the lanes are. Then in order to do a truly autonomous taxi service, you’ll want a two-way video linkup for the dispatcher to pilot the car if it gets stuck in some situation like the fire department blocking the street, or to monitor security.
For that reason, the current livery model works really well: a small, local company will service its fleet and its IT needs. The biggest expense, the driver, will be eliminated. This will serve an evolutionary role of a taxi service within a limited service area. This will be mostly shopping trips for car-less people, and “last mile” services to transit connection points, like Taxis serve now. The evolution comes with lower cost: short-haul, off-peak commuter needs, more “last mile” transit service where an autotaxi will be faster and more convenient than the local bus service, but also cheap.
What happens next? “Roaming” agreements among carriers sharing a common technology platform. The service areas of the autotaxi companies grow larger: your local autotaxi can drop you off on a shopping trip to a regional big-box store two towns over and the local autotaxi there can bring you back cheap. Expanded mobility, less reliance on transit.
This doesn’t mean the end of transit. Individual automobiles still require more energy and infrastructure to operate. The autotaxi will dominate short trips, but especially at peak demand, we will need to rely on higher-capacity transit backbones.
The biggest driver of the need for peak-period transit handoff is the capacity limitation of the autotaxi carriers. You simply cannot carry everyone, but you want to be a part of the picture. So, yeah, the service gets you from your house to the transit hub, and maybe works out relationships with local transit agencies so that the “last mile” can be served by autotaxi as a part of the transit fare itself.
The other limitation is longer-range travel: even a fully autonomous rubber-on-pavement highway system will not be able to match the speed of rail-based or air travel. The autotaxi might drive you fifty miles to the high-speed train station, but then you’ll board the bullet train for LA, which will be faster and charge a lower fare.
Anyway, the roaming evolution will mean that we go from local taxi service to regional airport shuttle service, and this will be great for those who live some distance from a long-haul transportation hub who want to make it to/from the airport, &c.
I think autonomous cars are a very reasonable evolution of human-piloted cars, which were a very reasonable evolution of horse-drawn carriages. In the twentieth century we evolved from horse to human drivers, and in the twenty-first we will evolve even more seamlessly from human to computer.
Our streets didn’t change much from the carriage to the automobile era. They’re wider and too dangerous for people to walk in. I doubt the streets will change much in the autonomous era, except they’ll narrow again and it will be safe to walk, bike, and play in them again.
My other prediction is that the autotaxi will make getting around so convenient, that car ownership will continue to decline. You will see a winners-and-losers scenario in the auto industry: the losers will realize too late just how badly they are in trouble. They will try to spread Fear, Uncertainty, and Doubt as to the safety and wisdom of reliance on autonomous vehicles, just as they try to sell some. The winners will have identified the coming trend and geared their business to serving the needs of autonomous fleet operators, and to those niche consumers for whom autonomous vehicles are not appropriate, or who just love driving their own car. Other winners will include pedestrians, cyclists, the young, the elderly, people with disabilities, suburbanites, night life, and very likely the environment.