Leading items
A conversation with Linus at LinuxCon Japan
Linus Torvalds rather famously does not like speaking in public, so his conference appearances tend to be relatively rare and unscripted. At LinuxCon Japan, Linus took the stage for a question-and-answer session led by Greg Kroah-Hartman. The result was a wide-ranging discussion covering many issues of interest to the kernel development and user communities.
3.0
The opening topic was somewhat predictable: what was the reasoning behind the switch to 3.0 for the next kernel release? The problem, said Linus, was too many numbers which were getting too big. A mainline kernel release would have a 2.6.x number, which would become 2.6.x.y once Greg put out a stable release. Once distributors added their build number, the result was an awkward five-number string. Beyond that, we have been using that numbering scheme for about eight years; at this point, the "2.6" part of a kernel version number is pretty meaningless.
Once upon a time, Linus said, making a change in the major number would be an acknowledgment of some sort of major milestone. The 1.0 kernel was the first to have networking, 1.2 added support for non-x86 architectures, 2.0 added "kind of working" SMP support, and so on. We used to think that incrementing the major version number required this kind of major new feature, but, in the 2.6.x time frame, we have stopped doing feature-based releases. The current development process works wonderfully, but it has caused the 2.6.x numbering scheme to stick around indefinitely. As we approach the 20th anniversary of the Linux kernel, we had a good opportunity to say "enough," so that is what Linus did.
"3.x" will not stay around forever - or even until the kernel is 30 years old; Linus said he expects to move on around 3.20 or so.
Linus noted that some people were thinking that 3.0 meant it was time to start with big new features (or removal of old code), but that's not what is meant to happen. It is just a number change, no more. Trying to have the kernel be stable all the time has, he said, worked very well; that is not going to change. Greg was clearly happy with this change; he presented Linus with the bottle of whiskey he had promised as a token of his appreciation. After debating opening it on the spot (Greg brought paper cups too, just in case), they decided it might be best to finish the discussion first.
Greg asked which recent changes Linus liked the most. Linus responded that he tends to like the boring features, things that people don't notice. Performance improvements, for example; he called out the dcache scalability work as one example. There is no new interface for users, it just makes the same old stuff go faster.
Features and bloat
Is the kernel, as Linus famously said on a 2009 panel, bloated? Linus acknowledged that it is still pretty big; it couldn't possibly run on the machine that he was using to develop it 20 years ago. But even phones are far more powerful than that old machine now, so nobody really cares. The kernel has been growing, but that growth is generally necessary to meet the needs of current hardware and users.
What about the addition of features just for the heck of it - new stuff that is not strictly driven by new hardware? Is that something that we can still do? Linus said that there are certainly developers working on features with no current users, thinking five years or so into the future. Sometimes that work succeeds, sometimes we end up with code that we regret adding. Linus said that he is increasingly insisting on evidence that real users of a feature exist before he is willing to merge that feature.
Greg asked about control groups, noting that a lot of kernel developers really object to them. Linus responded that control groups were a feature that did not initially have a whole lot of users, but they do now. Control groups were initially added for certain specific server setups; few others had any interest in them. Developers were unhappy because control groups complicate the core infrastructure. But control groups have begun to find a lot of users outside of the original target audience; they are, in the end, a successful feature.
Symmetric multiprocessing (SMP) was also, at the beginning, a feature with few users; it was a "big iron" feature. Now we see SMP support used across the board, even in phones. That illustrates, Linus said, one of the core strengths of Linux: we use the same kernel across a wide range of platforms. Nobody else, he said, does things that way; they tend to have different small-system and large-system kernels - iOS and Mac OS, for example. Linux has never done that; as a result, for example, there has never been a distinct cut-down kernel for embedded systems. Since the full kernel is available even at that level, Linux has been a real success in the embedded world.
Embedded systems, world domination, and the next 20 years
Continuing with the topic of embedded systems, Greg asked about the current furor over the state of the ARM tree. Linus responded that developers in that area have been a bit insular, solving their own problems and nothing more. That has resulted in a bit of a mess, but he is happy with how things are working out now. As a result of pushback from Linus and others, the ARM community is beginning to react; the 3.0 kernel, Linus thinks, will be the first in history where the ARM subtree actually shrinks. The embedded world has a history of just thinking about whatever small platform it's working on at the moment and not thinking about the larger ecosystem, but that is changing; this community is growing up.
Many years ago, Greg said, Linus had talked about the goal of "total world domination" and how more applications were the key to getting there. Is that still true? Linus responded that it's less true than it used to be. We now have the applications to a large degree. He also no longer jokes about world domination; it was only funny when it was obviously meant in jest.
At this point, Linus said, we are doing really well everywhere except on the traditional desktop. That is kind of ironic, since the desktop is what Linus started the whole thing for in the first place - he wanted to run it on his own desktop system. We have most of what we should need at this point, including a lot of applications, but the desktop is simply a hard market to get into. It's hard to get people to change their habits. That said, we'll get there someday.
Can we do anything in the kernel to further that goal? Linus responded that he'd been thinking about that question, but he didn't really know. A lot of work has been done to get the kernel to support the desktop use case; kernel developers, after all, tend to use Linux as their desktop, so they are well aware of how well it works. But it's up to the distributors to target that market and create a complete product.
Greg noted that 20 years is a long time to be working on one project; has Linus ever thought about moving on? Linus responded that he really likes concentrating on one thing; he is not a multi-tasker. He's really happy to have one thing that he is doing well; that said, he never expected to be doing it for this long. When asked if he would continue for another 20 years, Linus said that he'd be fairly old by then. Someday somebody young and energetic will show up and prove to be even better at this work; that will be Linus's cue to step aside.
What do we need to do to keep the kernel relevant? Linus said that relevance is just not a problem; the Unix architecture is now 40 years old, and it is just as relevant today as ever. Another 20 years will not make a big difference. But we will continue to evolve; he never wants to see the kernel go into a maintenance mode where we no longer make significant changes.
Moments, challenges, and licenses
A member of the audience asked Linus to describe his single most memorable moment from the last 20 years. Linus responded that he didn't really have one; the kernel is the result of lots of small ideas contributed by lots of people over a long time. There has been no big "ah ha!" moment. He went on to describe a pet peeve of his with regard to the technology industry: there is a great deal of talk about "innovation" and "vision." People want to hear about the one big idea that changes the world, but that's not how the world works. It's not about visionary ideas; it's about lots of good ideas which do not seem world-changing at the time, but which turn out to be great after lots of sweat and work have been applied.
He did acknowledge that there have been some interesting moments, though, going back to nearly 20 years ago when Linux went from a personal project to something where he no longer knew all of the people who were involved in it. At that point, he realized, Linux wasn't just his toy anymore. There have been exciting developments; the day Oracle announced that it would support Linux was one of those. But what it really comes down to is persistence and hard work by thousands of people.
Another person asked whether the increasing success of web applications would mean the end of Linux. Linus responded that the move toward the browser has, instead, been helpful to Linux. There used to be a whole lot of specialized, Windows-only applications for tasks like dealing with banks; those are now all gone. When applications run in the browser, the details of the underlying operating system don't matter, at which point it comes down to technology, licensing, and price - all of which are areas in which Linux excels.
The next question was: are you happy with Ubuntu? Linus suggested that Greg should answer that question for a more amusing result. He went on to say that Ubuntu is taking a different approach and is getting some interesting results. It is helpful to have a distributor working with a less technical, more user-centric approach. Ubuntu has been successful with that approach, showing the other distributors a part of the market that they were missing. Greg added that his main concern is that he wants to see the kernel community grow. Things are, Greg said, getting better.
What is the toughest technical problem that Linus has ever had to deal with? Linus answered that the biggest problems he faces are not technical. In the end, we can solve technical issues; we make bad decisions sometimes, but, over time, those can be fixed. When we have serious problems, they are usually in the area of documentation and help from hardware manufacturers. Some manufacturers not only refuse to help us support their hardware; they actively try to make support hard. That, Linus said, irritates him, but that problem is slowly going away.
What is really hard, though, is the problem of mixing the agendas of thousands of developers and hundreds of companies. That leads to occasional big disagreements over features and which code to merge. If Linus loses sleep, it tends to be over people and politics, not technical issues; the interactions between people can sometimes frustrate him. We usually solve these problems too, but the solution can involve bad blood for months at a time.
The linux-kernel mailing list, Linus said, is somewhat famous for its outspoken nature; it is seen as a barrier to participation sometimes. But it's important to be able to clear the air; people have to be able to be honest and let others know what they are thinking. If you try to be subtle on the net, people don't get it; that can lead to developers putting years of work into features that others simply hate. In the long run, Linus said, it can be much healthier to say "hell no" at the outset and be sure that people understand. Of course, that only works if we can then admit it when it turns out that we were wrong.
The final question was about the GPL: is he still happy with the license? Linus said that, indeed, he is still very happy with GPLv2. He had started with a license of his own creation which prohibited commercial use; it took very little time at all before it became clear that it was making life hard for distributors and others. So he has always been happy with the switch to the GPL which, he said, is a fair and successful license. He feels no need to extend it (or move to GPLv3); the license, he said, has clearly worked well. Why change it?
[Your editor would like to thank the Linux Foundation for assisting with his travel to Japan.]
MeeGo 1.2 on the N900
As discussed last week, the ongoing absence of a mass-market handheld device running MeeGo remains a thorn in the project's side. While the set-top box and in-vehicle platforms have signed on major OEM adopters, the general consumer market remains focused on smartphones. Several rumors were circulating around MeeGo Conference San Francisco last month, hinting that Nokia had intended to launch its long-planned MeeGo-based phone at the event, but had been forced to push back the release date again (the stories conflict as to why). Nevertheless, a community effort exists to develop a user-ready MeeGo distribution for the N900 handset, and that team did make a release at the conference.
The project is known as the MeeGo Developer Edition (DE) for N900, and its latest release is named MeeGoConf/SF in honor of its unveiling at the conference. The SF release is based on MeeGo 1.2, but includes a lengthy set of customizations that have been shared back upstream with the MeeGo Handset user experience (UX) project (although not all of the changes are destined to be merged into the official MeeGo releases). The MeeGo DE software is currently in a working and remarkably stable state, though the developers caution that it still should not be used as a daily replacement for the N900's default Maemo OS — there are the to-be-expected bugs, plus the loss of warranty protection. Still, the team has set very user-oriented goals for the project, and it is a pleasant surprise to see just how well some of them work.
Inside the Developer Edition
Jukka Eklund led a panel session presentation about the MeeGo DE release on the final day of the conference. In it, he outlined four use-cases that the team established as its goals: working cellular calls, working SMS messages, a working camera, and a browser working over WiFi. The camera goal was added to the list along the way, he noted, because the camera code was working so well.
In addition to that core functionality, the team decided to include certain add-ons on top of the base MeeGo Handset distribution, including additional applications (the Firefox For Mobile browser, Peregrine instant messenger, the QmlReddit news reader, etc.), improvements to core MeeGo Handset applications (such as a heavily patched and ported-to-QML dialer, and improved volume-control), and some entirely new functionality. The new functionality includes USB modes not yet supported upstream, plus the ability to lock and unlock a SIM card with a PIN code — including a user interface to do so.
Although the MeeGo DE project works entirely in the open, within the existing MeeGo project infrastructure (IRC, mailing lists, and bug tracker), Eklund described its mindset as working like a "virtual vendor", following the same product management life-cycle as a commercial hardware/software supplier would — but sharing the experience in the open, and providing its changes back to the community.
That is a novel and interesting approach: the Linux Foundation makes much of MeeGo's ability to shorten product-development times and cut costs, but typically with real-world examples where a product emerges only at the end. The WeTab team was on hand during the MeeGo Conference keynote to say that their MeeGo-based tablet went from design to production in a scant four months, for example. There is certainly no reason to doubt those numbers, but actually seeing the process documented is even better.
On top of that, Nokia has long defended its right to keep its own UX layer proprietary because it is the only way to "differentiate" its MeeGo products in the market. That mindset, it seems, has led partly to the project's current UX strategy, where MeeGo releases drop with a set of "reference quality" UXes that vary considerably in usability and polish. It is to the MeeGo DE team's credit that they have improved on the reference UX and sent their patches upstream. Some (such as the dialer) have already been merged; others (such as the theme and UI layer) have a less-than-clear fate awaiting them.
Eklund covered the current status of the SF release, gave some time for fellow panelists Makoto Sugano, Harri Hakulinen, Marko Saukko, and Carsten Munk to make comments, then proceeded to demonstrate the new code running on an N900. The demonstration included the revamped dialer, which he used to call Tom Swindell — the developer who ported the dialer to QML — there in the audience. He also showed off the basic application set, and demonstrated the N900's ability to run the MeeGo Tablet UX (although the tablet interface is not quite usable, as it assumes larger screen real estate).
Testing
As nice as the demo was, I decided that I had to try out the code on my own N900 in order to get the full experience. Installation instructions are on the MeeGo wiki; following them proved to be the only challenging part of the process — not insurmountable, but probably due for a cleanup. The wiki spreads out download, flashing, memory card preparation, and installation instructions over a large number of pages, some of which are not properly updated to reflect the current state of the release and show signs of multiple editors crossing paths.
For example, the "Install image" table lists two separate processes as the "recommended way
" to install MeeGo DE, and the most-recommended-process is listed as "Dual-boot install," which is a separate entry from "MMC Installation" even though it in fact requires installing MeeGo DE to the phone's MMC card. In addition, there are a few places where the instructions say that a 2GB memory card is sufficient, but this is no longer true as of the SF release, which takes at least 4GB of storage.
The simplest option is to install the uBoot bootloader on the phone (which necessitates activating the unstable Maemo Extras-devel repository first, at least long enough to add uBoot), then to write the raw MeeGo DE image to an un-mounted microSD card. On the SF release download page, this is the large .raw.bz2 file. It is theoretically possible to download the raw image from the N900 itself, then pipe it through bzcat into dd to write it directly to the memory card, but I wimped out and did the card-preparation step from a desktop Linux machine instead.
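For anyone scripting that card-preparation step, here is a minimal sketch in Python; the image file name and device path are placeholders, and the bzcat-into-dd pipeline described above accomplishes the same thing:

    # Decompress the MeeGo DE raw image and write it to an unmounted microSD card.
    # IMAGE and CARD are placeholders; writing to the wrong device destroys data.
    import bz2
    import shutil

    IMAGE = "meego-de-n900.raw.bz2"   # stand-in for the .raw.bz2 file from the download page
    CARD = "/dev/sdX"                 # replace with the actual, unmounted card device

    with bz2.open(IMAGE, "rb") as src, open(CARD, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)   # copy in 4 MiB chunks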
With the newly-written memory card inserted into the phone, all that remains is to power it on. The uBoot loader will recognize the bootable volume on the card and load it by default, booting the phone into MeeGo DE. To boot the phone into Maemo again, you simply reboot with the memory card removed, or type run noloboot at the uBoot prompt at power-on.
Browsing around, I have to say I was quite impressed with MeeGo DE. There are a few areas where DE is a noticeable improvement over the vanilla MeeGo Handset UX: the UI widgets are more consistent in appearance, the text is higher-contrast and more readable, and the UI navigation cues (such as the "return to home" button and the application grid's page indicator) are easier to pick up on.
Loading applications is slow, although this is no doubt largely dependent on the speed of the memory card I used. Once launched, the interface runs smoothly, including the quick QML transitions. As for the application set itself, the dialer is (as all reports had indicated) a vast improvement over the stock Maemo dialer. It is easier to find contacts and enter numbers, and the on-screen buttons respond faster to finger-presses. There are similar usability improvements in the network and general configuration settings — Maemo's interface for connecting to a new WiFi access point or Bluetooth device borders on arduous. The camera seems faster to focus, and is much faster to adjust to lighting changes and to display the most recent shot.
On the other hand, there are pain points. I could not get the video player to work at all (the SF release ships with both Big Buck Bunny and a stalwart, YouTube-caliber kitty video in 240p); it alternated between displaying only a black screen and playing the video wildly stretched out of proportion. The audio player has a nicer navigation structure than Maemo's (which relies on large, vertically-scrolling lists oddly displayed only in landscape format), but the playback buttons are tiny — about one-fourth the size of the application launcher buttons.
For some reason, MeeGo DE will sense the device orientation and rotate the display into three of the four possible directions, but not the fourth, upside-down-portrait. I don't have any need to use my phone in upside-down-portrait orientation, but surely it is more useful than upside-down-landscape, which is supported, even though it leaves you with an upside-down keyboard. Finally, I could not get the hang of the top status panel; I kept wanting (or expecting) it to open up a quick-settings menu, like it does in Maemo, for common tasks like enabling or disabling Bluetooth or the data connection.
The third-party applications are a mixed bag, as you might expect. I am a big fan of Firefox For Mobile, and Peregrine seems nice, but I was less impressed with the Kasvopus Facebook app. Some of the apps use their own widget styles, which can be awkward. That is certainly not MeeGo DE's fault, but it raises some questions about QML. One of Maemo's strong suits is that all of the applications use the same toolkit, so feature buttons, sliders, and notifications are easy to pick out. The more applications that write their own crazy QML interfaces, the more potential for user confusion there is.
On the whole, however, the SF release packs a good selection of applications. If you have purchased a phone recently, you no doubt remember that the first step is always removing the glut of sponsored apps and vendor-provided tools that only work with a single service. MeeGo DE doesn't go overboard in its service offerings, it just provides you with a flexible set of utilities: an IM client, not a GoogleTalk-only or Skype-only app; a real browser, not a Yahoo-only search tool.
All that said, MeeGo DE is not yet ready to serve as a full replacement for Maemo. The big blockers at the moment are mobile data and power management, but it will also need improvements in SMS and MMS support, call notification, and a few other key usability areas. Eklund and the other team members are straightforward about the intended audience of the DE builds: they are a showcase for MeeGo on handsets, they demonstrate that the code can run comfortably even on hardware several years old, and they allow the community to push the envelope where commercial phone-makers are not.
On all of those fronts, one must consider the MeeGo DE SF release a success. I experienced no crashes, and I both made and received phone calls without incident. Better yet, the UX layer (including both the interface and the application suite) is an improvement over the reference design. If the MeeGo steering group wants to keep the reference UXes in an unpolished, demo-only state, they still can. I personally feel like that is a mistake: no OEM is prevented from replacing them in order to differentiate its products, but the gadget-hungry public always sees a demo-quality design first. But MeeGo DE for the N900 shows that the project doesn't have to continue down that road, because the community is ready and willing to pitch in.
[ The author would like to thank the Linux Foundation for sponsoring his travel to the MeeGo Conference. ]
PGCon 2011, the PostgreSQL developer conference
PGCon is the international PostgreSQL contributor conference, held in Ottawa, Canada for the last four years. It serves the same purpose for the PostgreSQL community that the Linux Plumbers Conference does for Linux, attracting most of the top code contributors to PostgreSQL and its extensions and tools. Nearly 200 contributors from around the world attended this year, including hackers from as far away as Chile and the Ukraine, as well as a large contingent from Japan.
PostGIS knows where you are
"
PostGIS has been advancing rapidly, and according to Ramsey is now advancing GIS itself. "
The first requests for raster support and volumetric objects have come from archaeologists and water system planners. These people need to represent three-dimensional objects underground: not only their shape but their relative position in three dimensions. Another area where volumetric shapes are essential is airspace; one attendee demonstrated an SVG and PostGIS application which showed Ottawa airport traffic in real time.
"4D indexing" means indexing objects not just in space but in time as well so that people and objects can be tracked as they move. The proliferation of public video cameras and geographic data from our cell phones is supplying a flood of data about motion as well as location. Currently no software exists which can represent this data and search it on a large scale. The PostGIS project plans to develop it.
The primary limitations PostGIS is struggling with today are write-scalability and lack of parallel processing for GIS data. The latter is an issue because rendering, transforming, and testing spatial objects — such as for answering the question "which building parcels border this one?" — are CPU-intensive. Currently you cannot use more than one core for a single GIS query.
Recently Foursquare had to turn away from PostGIS because it couldn't keep up with the hundreds of thousands of updates per second that the service gets. Both the transactional overhead and the time required to write serialized geometric objects were too much. They switched to custom software on top of Lucene and MongoDB. Ramsey hopes that PostgresXC will supply clustering that PostGIS can use.
Conference Sessions
Since PGCon consists of 34 sessions in three parallel tracks, I couldn't attend everything; there were many good sessions which I simply had no time for. However, I did make it to the sessions described below.
Hacking the query planner
Possibly the most popular session at PGCon was Tom Lane's review and explanation of how the PostgreSQL query planner works. Tom Lane is the lead hacker on the PostgreSQL project, and his session, which ran overtime into a second session, was packed.
The query planner is part of answering database requests; it decides how to retrieve data most efficiently. Since even a moderately complex query can involve making decisions between millions of possible execution paths, the query planner code itself is a bit ... opaque.
Some of PostgreSQL's more than 25,000 lines of planner code are 20 years old. Many parts of the planner are best described as "kludgy," but Tom Lane and other lead developers are reluctant to modify them because they currently work fairly well. More importantly, the query planner is a holistic system with parts which interact in ways which are hard to predict. The difficulty of the task has left few developers interested in making improvements; Lane gave his talk in hopes of changing that.
The planner is the most complex part of the four stages which go into answering a PostgreSQL query. First, the Parser decides the semantic meaning of the query string, separating out object identifiers, operators, values, and syntax. Second, the Rewriter expands views, rules, and aliases. Third, the Planner chooses indexes and join plans, "flattens" subselects, and applies expressions to different tables in order to find the best query plan to execute. Lastly, the Executor executes the plan which the planner prepares.
The three goals of the planner are: to find a good (fast) query plan, to not spend too much time finding it, and to support PostgreSQL's extensibility by treating most operators, types, and functions as black box objects. The planner does this by forming a tree of "plan nodes," each of which represents a specific type of processing to be performed. Each node takes a stream of tuples (i.e. rows) in and outputs a stream of tuples.
Plan nodes consist of three types: Relation Scans, which read data from tables and indexes; Join nodes, which join two other nodes; and Special Plan nodes, which handle things like aggregates, function scans, and UNIONs of two relations. Each node has a data source (the input stream), output columns and calculations, and qualifiers or filters. Nodes also have a set of statistics in order to estimate how much time it will take to do all of these things. Plan nodes are mostly autonomous from each other, but not always; for example, the planner takes output sort order into account when considering a merge join.
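To make the tuple-stream idea concrete, here is a toy sketch in Python (emphatically not PostgreSQL's actual C implementation; the tables and join condition are invented) showing two plan nodes composed into a tree, each consuming rows from its input and yielding rows to its parent:

    def seq_scan(table, qual=None):
        # Relation Scan node: emit rows from a base table, applying an optional filter
        for row in table:
            if qual is None or qual(row):
                yield row

    def nestloop_join(outer, inner_table, cond):
        # Join node: for each outer row, scan the inner input and emit combined matches
        for o in outer:
            for i in inner_table:
                if cond(o, i):
                    yield {**o, **i}

    # Hypothetical relations standing in for real tables
    customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 2, "name": "Linus"}]
    orders = [{"order_id": 10, "cust_id": 1}, {"order_id": 11, "cust_id": 2}]

    # A two-node plan: scan customers, then join the result against orders on cust_id
    plan = nestloop_join(seq_scan(customers), orders,
                         lambda o, i: o["cust_id"] == i["cust_id"])
    print(list(plan))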
The first phase the planner goes through is an attempt to simplify the query. This includes simplifying expressions, "inlining" simple SQL functions, and "flattening" subselects and similar query structures into joins. This simplification both gives the planner the maximum flexibility in how to plan the query, and reduces repetitive re-processing for each tuple.
The planner then goes through Relation Scan and Join planning, which roughly correspond to the FROM and WHERE portions of a SQL query. Of this planning stage, the most expensive part is Join planning, since there is a factorial expansion of the number of possible plans based on the number of relations you are joining (i.e. n tables may be joined in n! ways). For a simple to moderately complex query, the planner will attempt to plan all possible execution plans, or Paths, it could take to perform the query.
For a very complex query — for example, one with a 25-table Join — the planner needs to resort to approximate planning methods, such as the "GEQO" genetic algorithm for creating query plans built into PostgreSQL. GEQO does not tend to produce very good query plans, however, and really needs improvement or replacement with another algorithm. There is also a set of simplification rules for complex Join trees, but since these are not estimate-based, they sometimes cause bad plans.
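A quick back-of-the-envelope calculation shows how fast that factorial growth gets out of hand (the point at which PostgreSQL gives up on exhaustive search and switches to GEQO is controlled by its geqo_threshold setting):

    import math

    # Roughly n! possible join orders for n relations
    for n in (3, 8, 12, 25):
        print(f"{n:2d} relations: about {math.factorial(n):.3e} join orders")

At 25 relations the count is on the order of 10^25, far too many plans to enumerate and cost individually.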
All of the preceding is needed to determine the possible Paths for executing the query, so that the planner can calculate the cost of each path and find the "cheapest" plan in terms of estimated execution time. This estimate is created by combining a set of statistics about tables and indexes, which is stored in the pg_statistic system table, with a set of estimation functions keyed to operators and data types. For each join or scan with a filter condition (such as a WHERE clause), the planner needs to estimate the selectivity of the conditions so that it can estimate the number of tuples returned and the cost of processing them.
Using both a histogram and stored selectivity estimates, the planner can usually calculate this fairly accurately, but often fails with complex expressions on interdependent relations or columns, such as AND/OR conditions on columns which are substantially correlated. The other large omission from this design which causes planning issues is the lack of estimate calculations for functions.
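As a rough sketch of how those pieces fit together for a single filtered sequential scan, consider the following; seq_page_cost and cpu_tuple_cost are real planner parameters shown at their default values, but the table statistics and selectivity are invented, and the real cost model includes further terms (such as per-row operator costs for evaluating the filter):

    # Simplified cost arithmetic for a filtered sequential scan, loosely following
    # the documented PostgreSQL cost model. All statistics here are made up.
    seq_page_cost = 1.0        # cost charged per sequentially-read page
    cpu_tuple_cost = 0.01      # cost charged per tuple processed

    relpages = 10_000          # pages in the table (a pg_class-style statistic)
    reltuples = 1_000_000      # rows in the table
    selectivity = 0.005        # estimated fraction of rows passing the WHERE clause

    rows_out = reltuples * selectivity
    cost = relpages * seq_page_cost + reltuples * cpu_tuple_cost
    print(f"estimated rows out: {rows_out:,.0f}; estimated scan cost: {cost:,.1f}")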
Lane also went over the need for major future work on the planner. Two specific issues he mentioned were the need for parameterized scans for subselects and functions, and the need to support estimates on Foreign Data Tables in PostgreSQL 9.1. He also went over in detail some of the files and code structures which make up the query planner. His slides [PDF] are available on the PGCon website.
Developer Summits
The biggest change to the conference this year was the multiple summits and meetings held alongside and after the conference. Tuesday was the Clustering Hacker's Summit, Wednesday was the Developers' Meeting, and Saturday was the PL/Summit, a new event. The goal of these all-day meetings was better coordination of the hundreds of contributors in more than 20 countries who work on PostgreSQL.
The Clustering Summit was the second such event, sponsored by NTT Open Source, who also sponsored the previous summit in Tokyo in 2009. PostgreSQL supports multiple external replication and clustering projects, many of them having only one or two developers. The thirty attendees included developers on the Slony-I, Bucardo, Londiste, pgPool-II, and GridSQL projects. Several NTT and EnterpriseDB staff there were representing a newer project, called PostgresXC, intended for write-scalable clustering.
For the last three years, twenty-five to thirty of the most prominent and prolific PostgreSQL code contributors have met in order to discuss development plans and projects. This is what has allowed the project to work on several features, such as binary replication, which have taken several years and several releases to complete. While there was a great deal of discussion at the summit, few decisions were made. The Developers' Meeting did set a schedule for 9.2 development, which will be very similar to last year's schedule, with CommitFests on June 15th, September 15th, November 15th, and January 15th, and a release sometime in the summer of 2012.
This year's PGCon also included the first summit of contributors to the various Procedural Language (PL) projects, which drew developers of PL/Perl, PL/Python, PL/Lua, PL/PHP, PL/Tcl, and pgOpenCL.
There were some other announcements at PGCon. Marc Fournier, one of the founding Core Team members for PostgreSQL, retired from the Core Team. This follows several changes on the Core Team in the last year, including the retirement of founding member Jan Wieck and the addition of major contributor Magnus Hagander. Postgres Open, a new business-oriented PostgreSQL conference, was announced by its organizer.
Conclusion
Lest you think that PGCon was entirely brainiac database hacking, the most popular track was probably the unofficial Pub Track held at the Royal Oak Pub near campus, where Aaron Thul introduced The PostgreSQL Drinking Game [PDF]. Next year you could go to PGCon, and learn both how to hack databases and how to drink to them.