Co·sent: shared perception; joint knowledge; collective intelligence.
Collaborating with nine companies across five countries, the Plone Intranet Consortium moved from plan to working code within months. How did we do it?
Fast-forward to summer 2015: we've already done a Mercury "technology preview" release and are now in feature freeze, preparing for the 1.0 release, codename Venus, this summer.
As you can see in the video, for all of us it's very important to be part of the open source community that is Plone.
At the same time, we use a different process: design-driven development, which impacts our code structure and the way integrators can leverage the Plone Intranet platform.
Sharing and re-use
All of our code is open source and available on Github. In terms of re-use we have a mixed strategy:
First of all it's important to realize we're doing a design-driven product, not a framework. We have a vision, and many components are closely integrated in the user experience (UX). From a UX perspective, all of Plone Intranet is an integrated experience. Sure, you can customize that, but you have to customize holistically. You cannot rip out a single feature and expect the UX for that to stand on its own.
In the backend the situation is completely different. All the constituent packages are separate, even though they live in one repo and one egg. You can install ploneintranet.microblog without installing the whole ploneintranet stack: the whole ploneintranet source needs to be present at the Python level, but you can load only the ploneintranet.microblog ZCML and GenericSetup profile and you'll be fine. All our packages have their own test suites, which are run independently. Of course you need activitystream views to display the microblog - and that's frontend UX, one of the most complex and integrated parts of our stack, with AJAX injections, mentions, tagging, content mirroring and file preview generation.
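As a sketch of that selective loading, a policy package's configure.zcml could include only the microblog component; everything beyond the package name itself is illustrative here, so treat it as an outline rather than a recipe:

```xml
<configure xmlns="http://namespaces.zope.org/zope">

  <!-- Load only ploneintranet.microblog's configuration,
       not the full ploneintranet stack.  After loading the ZCML,
       you'd apply the package's GenericSetup profile (e.g. via
       portal_setup) to install just the microblog storage. -->
  <include package="ploneintranet.microblog" />

</configure>
```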
Another example is search: a completely re-usable backend, but you'd have to provide your own frontend. Our backend is pluggable - we currently support both ZCatalog and Solr engines and expect to also support Elasticsearch in the future. We have documented our reasons for not reusing collective.solr.
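A pluggable search backend like that can be pictured as a registry of engines behind one common query interface. Here is a minimal sketch of the pattern; all names (`SearchBackend`, `register_backend`, `search`) are hypothetical illustrations, not the real ploneintranet API:

```python
from abc import ABC, abstractmethod


class SearchBackend(ABC):
    """Common interface that every engine (ZCatalog, Solr, ...) implements."""

    @abstractmethod
    def query(self, text):
        """Return a list of matching document ids."""


class NaiveBackend(SearchBackend):
    """Toy in-memory engine standing in for a real catalog."""

    def __init__(self, docs):
        self._docs = docs  # {doc_id: body text}

    def query(self, text):
        return [doc_id for doc_id, body in self._docs.items()
                if text.lower() in body.lower()]


_backends = {}


def register_backend(name, backend):
    """Register an engine under a name, e.g. 'zcatalog' or 'solr'."""
    _backends[name] = backend


def search(text, engine="zcatalog"):
    """Dispatch a query to whichever engine is configured."""
    return _backends[engine].query(text)
```

The frontend only ever calls `search()`; swapping ZCatalog for Solr (or later Elasticsearch) means registering a different backend, without touching any calling code.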
Design and user experience are key
We don't believe that loosely coupled components with independent developer-generated frontends create a compelling user experience. Instead of working from the backend towards the frontend, we work the other way around and focus on creating a fully integrated, beautiful user experience.
The downside of that is that it becomes more difficult to reuse components independently. That's a painful choice because obviously it reduces open source sharing opportunities. We do open source for a reason, and you can see much evidence that we care about that in the level of our documentation, in our code quality, and in the careful way we've maintained independent backend packages, including listing component package dependencies, providing full browser layer isolation and most recently providing clean uninstallers for all our packages.
Plone Intranet is a huge investment, and we're donating all our code to the Plone community. We hope to establish a strong intranet sub-community while at the same time strengthening the vibrancy of the Plone community as a whole.
The Change Factory is a platform where environmental activists share knowledge
The site offers an online knowledge platform for the entire Dutch environmental movement. The Change Factory is structured using a classical watch-learn-do approach:
- The Knowledge Base provides searchable background information on various topics.
- The Toolbox offers step-by-step help in organising a new initiative.
- The Network connects activists with each other and facilitates learning by sharing practical experiences.
The site is currently only available in Dutch. See the Dutch intro video to get a feel:
Knowledge Management in action
Effective knowledge management and sharing of knowledge results not just from publishing documents ("explicit knowledge"). Much learning and knowledge sharing results from the interactions between people, in which they exchange "tacit knowledge" - stuff you know but didn't know you knew.
The Change Factory is designed to support both aspects, in the form of a knowledge base with documents (explicit knowledge) on the one hand, and a social network geared towards conversation and interpersonal contact (implicit knowledge sharing) on the other hand. A toolbox with learning tools connects both aspects into a learning resource.
As a knowledge platform, the site supports a cycle of knowledge flow and knowledge creation following the well-known SECI model:
- Socialization: sharing knowledge in conversation. The network promotes direct contact and dialogue between environmental activists, by not only describing the what of an activity, but also who the organisers are and presenting their contact info. Additionally, a discussion facility on the network makes it easy to exchange experiences.
- Externalisation: writing down your knowledge. The network is built around the exchange of "experiences", documenting an action format so people can learn from successes in another town and replicate the format. This helps to articulate tacit knowledge into explicit knowledge.
- Combination: searching and finding knowledge. The searchable knowledge base, organised by theme, facilitates the re-use of knowledge. Documented action formats in the network all follow the same stepwise model, making it easy to mix and match steps from various formats in creating your own activity.
- Internalization: turning learning into action. The toolbox with process support documentation helps you assimilate best practices by bringing them into practice, following a simple four-step plan. Here, you absorb explicit know-what knowledge and internalize it into tacit know-how.
The combination of these approaches turns The Change Factory into much more than just a set of documents. The site is a living whole where people communicate and help each other to become more effective, facilitating the transition of our society to a more sustainable world.
Following the initial project brief, Cosent performed design research in the form of a series of interviews with intended users of The Change Factory. These interviews inquired into the way activists collaborate and communicate in practice, focusing on what people actually need and how an online platform could contribute to their success.
What emerged from the research is that nobody wants more long documents. Nor was there any need for a marketplace-like exchange of services and support. Rather, the interviewees articulated a need for quick-win snippets of actionable knowledge that can immediately be put into practice.
Based on the outcomes of this research, we introduced the Network aspect of the platform: a social network centered on the sharing of successful action formats, structured in a way that facilitates dialogue, direct contact, and re-use of proven formats across multiple cities.
After articulating this concept into an interaction design and visual design, Cosent built the site in Plone CMS. An initial crew of editors seeded the site with content. Recently, the platform was publicly unveiled and immediately attracted scores of active users.
Site: The Change Factory
Concept: Milieudefensie and Cosent.
Interaction design and realisation: Cosent.
Visual design and logo: Stoere Binken Design.
Sprinting in München transformed both the team and the code base of Plone Intranet.
The Plone Intranet project represents a major investment by the companies that together form the Plone Intranet Consortium. Last week we gathered in München and worked really hard to push the Mercury milestone close to an initial release.
Mercury is a complex project, challenging participants out of their comfort zones in multiple ways:
- Developers from six different countries are collaborating remotely, across language barriers and time zones.
- People are collaborating not within their "own" home team but across company boundaries, with people they haven't really worked with before, who bring not only a different cultural background but also a different "company coding culture".
- The backend architecture is unlike any stack people are used to working with. Instead of "normal" content types, you're dealing with async and BTrees.
- The frontend architecture represents a paradigm shift as well, requiring a significant change in developer attitude and practices. Many developers are used to working from the backend forward; we are turning that on its head. The design is leading and the development flow is from frontend to backend.
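The backend shift mentioned above - async and BTrees instead of "normal" content types - can be made concrete with a small sketch. The core idea of a BTree-backed status container is that statuses are keyed by an ever-increasing id, so "everything newer than X" is a cheap sorted-range lookup. This stdlib-only toy is a conceptual stand-in, not the actual ploneintranet.microblog implementation (which uses ZODB BTrees):

```python
import bisect
import itertools


class MicroblogContainer:
    """Conceptual stand-in for a BTree-backed status container:
    statuses are keyed by a monotonically increasing integer id,
    keeping range queries ("everything newer than X") cheap."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._keys = []    # sorted list of status ids
        self._items = {}   # id -> status text

    def add(self, text):
        """Store a status update and return its key."""
        key = next(self._ids)
        bisect.insort(self._keys, key)
        self._items[key] = text
        return key

    def newer_than(self, key):
        """Return all statuses with an id strictly greater than `key`."""
        start = bisect.bisect_right(self._keys, key)
        return [self._items[k] for k in self._keys[start:]]
```

In a real BTree the same range semantics come for free from the sorted key structure, with the added benefit of efficient ZODB persistence and conflict resolution.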
So we have a fragmented team tackling a highly challenging project. The main goal we chose for the sprint, therefore, was not only to produce code but, more importantly, to improve team dynamics and increase development velocity.
Monday we started with getting everybody's development environments updated. Also, Cornelis provided a walkthrough of how our Patternslib-based frontend works. Tuesday the marketing team worked hard on positioning and communications, while the developer teams focused on finishing open work from the previous sprint. As planned, we used the opportunity to practice the Scrum process in the full team to maximize the learning payoff for the way we collaborate. Wednesday we continued with Scrum-driven development. Wednesday afternoon, after the day's retrospective, we had a team-wide discussion resulting in a big decision: to merge all of our packages into a single repository and egg.
The big merge
Mercury consists of 20 different source code packages, each of which had its own version control and its own build/test tooling. This has some very painful downsides:
- As a developer you need to build 20 separate testing environments. That's a lot of infrastructure work, not to mention a fiendishly complex Jenkins setup.
- When working on a feature, you're either using a different environment than the tests are run in, or you're using the test environment but are then unable to see the integrated frontend results of your work.
- Most user stories need code changes across multiple packages, resulting in multiple pull requests that each depend on the other. It's impossible not to break your continuous integration testing that way.
- We had no single environment where you could run every test in every package at once.
So we had a fragmented code base which imposed a lot of infrastructure work overhead, created a lot of confusion and cognitive overhead, actively discouraged adequate testing, and actively encouraged counterproductive "backend-up" developer practices instead of fostering a frontend-focused integrative effort.
Of course throwing everything into a big bucket has its downsides as well, which is why we discussed this for quite some time before making our decision.
The main consideration is code re-use and open source community dynamics. Everybody loves to have well-defined, loosely coupled packages that they can mix and match for their own projects. Creating a single "big black box" ploneintranet product would appear to be a big step backward for code re-use.
However, the reality we're facing is that the idea of loosely coupled components is not how the code actually behaves. Sure, our backend is loosely coupled. But the frontend is a single highly integrated layer. We're building an integrated web application, not a set of standalone plugins.
We've maintained the componentized approach as long as we could, and it has cost us. A good example is plonesocial: different packages with well-defined, loosely coupled backend storages. But most of our work is in the frontend, where a single change requires switching between at least three packages.
In addition, these packages are not really pluggable anymore in the way Plone devs are used to. You need the ploneintranet frontend, you need the ploneintranet application, to be able to deliver on any of its parts. Keeping something like plonesocial.activitystream available as a separately installable Plone plugin is actively harmful in that it sets wrong expectations. It's not independently re-usable as is, so it should not be advertised as such.
We see three strategies by which Plone integrators can use ploneintranet:
- Cosmetic reskinning. You take the full ploneintranet application and do some cosmetic overrides, like changing the logo and colours of the visual skin.
- New application. You design and develop a new application. This starts with a new or heavily customized frontend prototype, for which you then also implement the backend. Technically you either fork and tweak ploneintranet, or you build your own application from scratch, re-using the ploneintranet parts you want to keep in the backend via library mode re-use, see below.
- Library mode cherry-picking. You have a different use case but would like to leverage parts of the ploneintranet backend for heavy lifting. Your application has a Python dependency on those parts of ploneintranet you want to re-use: via ZCML and GenericSetup you only load the cherries you want to pick.
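In GenericSetup terms, cherry-picking looks roughly like this in your own profile's metadata.xml; the profile ids below are illustrative, so check the actual component profiles before depending on them:

```xml
<?xml version="1.0"?>
<metadata>
  <version>1000</version>
  <dependencies>
    <!-- pull in only the components you want to re-use -->
    <dependency>profile-ploneintranet.microblog:default</dependency>
    <dependency>profile-ploneintranet.workspace:default</dependency>
  </dependencies>
</metadata>
```

Combined with a dependency on the ploneintranet egg and ZCML includes for the same packages, this loads only the cherries you pick, leaving the rest of the stack uninstalled.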
Please keep in mind that this situation is exactly the same for the companies who are building ploneintranet. We have those same three options. In addition, there's a fourth option:
Your client needs features which are not currently in ploneintranet but are actually generally useful good ideas. You hire the ploneintranet designer to design these extensions, and work with the ploneintranet consortium to develop the new features into the backend. You donate this whole effort to ploneintranet; in return you get reduced maintenance cost and the opportunity to re-use the ploneintranet application as a whole without having to do a full customization.
You'll have to join the Plone Intranet Consortium in order to pursue this fourth strategy. But again, there's no difference for current members: we had to join as well.
To make individual component re-use possible, we've maintained the package separation we already had - ploneintranet may be one repository, one egg, but it contains as separate python packages the various functional components: workspace, microblog, document preview, etc. So we do not subscribe to JBOC: Just a Bunch of Code. We don't throw everything into one big bucket but are actively investing in maintaining sane functional packages.
A variant of cherry-picking is to factor out generically re-usable functionality into a standalone collective product. This will generally only be viable for backend-only, or at least frontend-light, functionality, for the reasons discussed above. A good example is collective.workspace: the ploneintranet.workspace implementation is not a fork but an extension of collective.workspace. This connection enables us to implement all ploneintranet-specific functionality in ploneintranet.workspace, while factoring all general improvements out to the collective. That has already been done and resulted in experimental.securityindexing.
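The extension-not-fork relationship can be sketched generically. Both classes below are hypothetical stand-ins for the collective.workspace / ploneintranet.workspace split, not the real code:

```python
class Workspace:
    """Stand-in for the generic, community-maintained behaviour
    (the collective.workspace role in this analogy)."""

    def __init__(self, members):
        self._members = list(members)

    def members(self):
        """Generic roster access; improvements here benefit everyone."""
        return list(self._members)


class IntranetWorkspace(Workspace):
    """Stand-in for ploneintranet.workspace: extends the generic
    class with intranet-specific behaviour instead of copying it."""

    def members_with_profiles(self):
        # intranet-specific: decorate the generic roster data
        return [(m, "/profiles/" + m) for m in self.members()]
```

The design pay-off is the direction of flow: fixes to the generic base land upstream in the collective, where every consumer benefits, while product-specific features stay in the subclass.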
On Thursday we announced a feature freeze on the whole stack, worked hard to get all tests to green and then JC performed the merge of all ploneintranet.* into the new ploneintranet consolidated repo. Meanwhile Guido prepared the rename of plonesocial.* to ploneintranet.*. On Friday we merged plonesocial into ploneintranet and spent the rest of the day in hunting down all test regressions introduced by the merges. Because we now have a single test runner across all packages that meant we also identified and had to fix a number of test isolation problems we hadn't seen before.
Friday 20:45 all tests were finally green on Jenkins!
We still have to update the documentation to reflect the new consolidated situation.
In terms of team building this sprint has been phenomenal. We've been sprinting on ploneintranet for five months now, but this was the first time we were physically co-located, and that's really a completely different experience. We already did a lot of pair programming remotely, but it's better if you are sitting next to each other and actually looking at the same screen. Moreover, feeling the vibe in the room is something you cannot replicate remotely. The explosion of energy and excited talking after we decided to do the consolidation merge was awesome.
On top of that we now have a consolidated build, and I can already feel in my own development the ease of mind from knowing that the fully integrated development environment I'm working in is identical to what all my team members are using, and is what Jenkins is testing. Instead of hunting for branches I can see all ongoing work across the whole project by simply listing the ploneintranet branches. Reviewing or rebasing branches is going to be so much easier.
On top of all that we also made significant progress on difficult features like the document previewing and complex AJAX injections in the social stream.
We started with a fragmented team, working on a fragmented code base. We now have a cohesive team, working on a unified code base. I look forward to demoing Mercury in Sorrento in a few weeks.
IntraTeam conference highlights ongoing disruption in intranet market.
Never mention the word "intranet" on a date, or in any conversation for that matter. It bores people to death.
Ask a whole auditorium of intranet managers: what emotions does your intranet evoke? Beyond indifference, the answers you get are: resignation, frustration, rage, desperation, contempt.
Intranets are dying
Throw an intellectual heavyweight like Dave Snowden into that mix, and he'll happily challenge some of the audience's dearly held beliefs.
"The future is distributed. I don't believe, in five years' time, there'll be significant presence at any conference to do with intranets."
"The intranet is going to die. We're moving to fully distributed systems. The sooner you start shifting the better."
Snowden thinks apps are much better at playing this new field of distributed information and knowledge management.
WCM best positioned to take advantage
On first sight, the Office 365 cloud offering looks similar to the SharePoint on-premise version. But if you look closer, you'll find that complex portals or integrated digital workplaces cannot be migrated to the cloud. You need a complete re-design to bring your existing intranet to Office 365.
"Microsoft doesn't care for the on-premise customers for the next five years. There will be a lot of customers looking for alternatives to SharePoint, once they realize that SharePoint 2013 is dying."
Web Content Management systems and portal systems are already better than SharePoint for building complex systems with multi-language capabilities and integrations with other systems. The departure of SharePoint from the intranet market will accelerate the search for alternative solutions. The departure of the dominant supplier leaves a lot of extra oxygen available for the rest.
Return of the portal
What we're seeing here is a transition from the outdated intranet concept to a new digital workplace paradigm.
"We're moving into a radical new approach to software which is fully distributed. The only interesting question at the moment is: what is the glue that holds it all together? That is probably the big strategic area to grasp."
Paradoxically, one of the contenders for the integrative part that brings everything together is a revival of the portal concept.
The screenshot below is from James Robertson's presentation and shows how an HR page becomes dramatically more useful by pulling relevant, personalized information from various back-office systems.
The blurring is a design feature by the way. There's a button to un-blur your pay info when nobody is watching over your shoulder.
Seeing that screenshot, Kristian Norling tweeted:
And that's something that Perttu Tolvanen also referred to.
This is not your grandfather's portal anymore, though. Snowden again:
"The other point is, things will be loosely coupled. This, by the way, is where object orientation comes in big time, but true object orientation."
That's a vision similar to the one Prahalad and Krishnan describe in their 2008 book The New Age of Innovation.
Connecting heterogeneous networks of loosely coupled business objects is core business for web technology. Open standards and open source approaches are especially well placed to thrive in such environments.
Well-researched audience personas communicate deep insights
The meeting was a disaster. It was supposed to be a "rubber-stamp" type of event, just to discuss a few questions: the proposal and the budget had already been approved, and the design had been signed off half a year earlier.
Unfortunately, some key questions could not be answered.
3 questions to check your communications vision
To validate any communications project, three basic questions are useful:
- Who are we serving with this project?
- What will be the difference for them, a year from now, if we succeed?
- How will we interact with our audience and improve their lives?
Boom. The client has some ideas, just enough to invalidate the current design. But at the same time, those ideas are vague enough that it is not possible to articulate design directions, or make decisions.
The standard Dutch reflex to such situations is: we need to have some more meetings to discuss this (codespeak for: let's try and negotiate our differences away).
That reflex is wrong. You don't need to negotiate opinions. You need more data.
Personas synthesize research based insights
If you're building web sites, or any service that people interact with, you need a clear picture of who your intended audience is. You don't get that picture by sitting at your desk. You need to get out of the building and interview real, live humans to find out what makes them smile. To find out how exactly your service can add value to their lives.
Once you've done that research, you can summarize and synthesize the findings into audience personas: profiles of fictional people that represent a typical customer. Personas describe key aspects of a person's life, goals and behaviors. Below are some example personas, based on Mailchimp user research:
Those personas then drive your design. They enable you to empathize with your audience. They enable you to create a design that touches people in their hearts. And they enable you to overcome the sometimes weird ideas of the people holding the purse strings, who happen to like the color red, by grounding design decisions in a solid, data-driven understanding of audience needs and preferences.
Instead of making stuff up about your customers, you can ask them for the five key insights you need to focus your strategy:
- What business need drives them to search for a solution in your market space?
- What do buyers expect to achieve by implementing your solution?
- Which were the reasons not to buy your solution? Make sure to also interview non-customers!
- Who is involved in the decision-making process, and which resources are trusted?
- Which factors are key in weighing alternative options and making a purchasing decision?
Gaining a deep understanding of these five points is key not only for buyer personas, but for design personas in general.
If you've articulated these five insights, you'll know the answers to the who, what and how questions. You'll know who you're serving, what value you're adding, and how this fits into the lives of your audience. The rest is execution.
Don't be fuzzy.
If you don't know the who, what and how: acknowledge you don't know enough.
You don't need to guess.
Persona creation is a proven methodology for crafting data-informed audience profiles.
Just do it.
Doing the legwork to develop true insights is hard work. But it's actionable, not magic.
Use it or lose it.
Services and businesses that invest in a solid evidence-based understanding of their audiences' needs will outshine those who design and market based on hunches.