In this guest blog series, Vadim Rutkevich from StiltSoft takes a closer look at what is possible when combining their products with our Metadata for Confluence add-on: analyzing, filtering, and visualizing page metadata.
Atlassian Confluence provides a rich set of capabilities for managing content. Its collaborative editing, together with an extensive ecosystem of add-ons, lets your team reach its goals quickly and easily.
Teams that use Confluence extensively rely on different methods to categorize and structure pages for quick access and easy navigation. This is where page metadata can help.
By default, you can use the native Page Properties macro, which lets you specify custom key/value pairs for individual pages and keeps the metadata within the page content. The Page Properties Report macro then collects pages by label and generates an index of the pages matching the defined criteria, together with their properties.
But this approach is not perfect. You always have to enter the text values manually, so sooner or later someone misspells a word or enters a value that differs from the original one. Here you need a more consistent and comprehensive solution: the Metadata for Confluence add-on from Communardo helps you get page metadata under control.
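To make the problem concrete, here is a minimal Python sketch (unrelated to either add-on's API; the page titles and field values are invented) of how free-text metadata drifts apart and why a predefined value list keeps it consistent:

```python
from collections import Counter

# Purely illustrative data: page metadata entered as free text.
pages = [
    {"title": "Sprint 12", "Sprint Status": "Closed"},
    {"title": "Sprint 13", "Sprint Status": "closed"},  # casing differs
    {"title": "Sprint 14", "Sprint Status": "Close"},   # value misspelled
]

# A report that groups by the raw text treats these as three distinct statuses.
print(Counter(p["Sprint Status"] for p in pages))
# Counter({'Closed': 1, 'closed': 1, 'Close': 1})

# A metadata field backed by a predefined value list catches the drift at input time.
ALLOWED = {"Open", "Active", "Closed"}
rejected = [p["title"] for p in pages if p["Sprint Status"] not in ALLOWED]
print(rejected)  # ['Sprint 13', 'Sprint 14']
```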
As the content in your Confluence grows, one day you may want to get insight into its structure and find criteria to classify or categorize it. The Table Filter and Charts add-on from StiltSoft helps you filter, aggregate, and visualize that content.
Sometimes you may want an aggregated view of your page metadata, and the Pivot Table macro from the Table Filter and Charts add-on helps with exactly that.
While editing the page, type ‘{Pivot Table}’ to insert the macro, and then move the Metadata Overview and Table Filter macros inside it.
After saving the page, let’s build the pivot table. Select Addon Name as Row Labels and Sprint Status as Column Labels. The result is an aggregated view of the number of sprints per add-on and sprint status.
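For readers who prefer to see the aggregation spelled out, here is a rough pandas analogue of what this pivot configuration computes. The data rows and add-on names are invented; only the column names follow the example above:

```python
import pandas as pd

# Invented sprint pages; only the column names ("Addon Name", "Sprint Status")
# mirror the metadata fields used in this example.
df = pd.DataFrame([
    {"Addon Name": "Add-on A", "Sprint Status": "Closed"},
    {"Addon Name": "Add-on A", "Sprint Status": "Active"},
    {"Addon Name": "Add-on B", "Sprint Status": "Closed"},
    {"Addon Name": "Add-on B", "Sprint Status": "Closed"},
    {"Addon Name": "Add-on C", "Sprint Status": "Active"},
])

# Row Labels = Addon Name, Column Labels = Sprint Status, counting sprints
# per cell -- the same shape the pivot table shows on the page.
pivot = pd.crosstab(df["Addon Name"], df["Sprint Status"])
print(pivot)
# Sprint Status  Active  Closed
# Add-on A            1       1
# Add-on B            0       2
# Add-on C            1       0
```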
The next thing we can do is compare the estimated velocity of all closed two-week sprints across the different add-ons. This gives us insight into team performance and helps us look for possible improvements in the development process.
We rebuild our pivot table: select the Addon Name column as Row Labels, change the calculated column to Estimated Velocity, and switch to the Average, Min and Max operations.
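As a hedged illustration rather than the macro's actual implementation, the same filtering and aggregation expressed in pandas looks roughly like this (invented numbers; a field name such as "Duration (weeks)" is assumed only to express the two-week filter):

```python
import pandas as pd

# Invented sprint records with assumed metadata field names.
sprints = pd.DataFrame([
    {"Addon Name": "Add-on A", "Sprint Status": "Closed", "Duration (weeks)": 2, "Estimated Velocity": 21},
    {"Addon Name": "Add-on A", "Sprint Status": "Closed", "Duration (weeks)": 2, "Estimated Velocity": 24},
    {"Addon Name": "Add-on B", "Sprint Status": "Closed", "Duration (weeks)": 2, "Estimated Velocity": 22},
    {"Addon Name": "Add-on B", "Sprint Status": "Active", "Duration (weeks)": 2, "Estimated Velocity": 25},
])

# Keep only closed two-week sprints (the Table Filter step), then aggregate
# Estimated Velocity per add-on (the Pivot Table step with Average, Min, Max).
closed = sprints[(sprints["Sprint Status"] == "Closed")
                 & (sprints["Duration (weeks)"] == 2)]
summary = closed.groupby("Addon Name")["Estimated Velocity"].agg(["mean", "min", "max"])
print(summary)
#            mean  min  max
# Add-on A   22.5   21   24
# Add-on B   22.0   22   22
```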
As you may notice, there are no significant deviations in estimated velocity between the different teams.
Author: Vadim Rutkevich, StiltSoft