Introducing Sigma Computing Embedded Analytics for Confluence

We're excited to announce the launch of Sigma Computing Embedded Analytics for Confluence - our newest connector that brings live data visualizations and analytics from Sigma Computing directly into your Confluence pages.

If your team uses Sigma to analyze data from Snowflake, Databricks, BigQuery, Redshift, or other data warehouses, you can now embed those insights directly into your Confluence documentation, project pages, and team wikis without switching between tools.

One Workspace. All Your Data.

Connect Sigma to Snowflake, BigQuery, Salesforce, Google Analytics, and other data sources to embed live analytics directly in Confluence. Give your teams unified access to data insights without leaving their workspace.

[Image: One workspace, all your data - Sigma connects to multiple data sources including Salesforce, Google Analytics, Snowflake, and more]

Why We Built This

Over the past year, we've talked to hundreds of teams using Confluence as their central knowledge hub. A common pattern emerged - teams building data-driven products and processes wanted to bring their analytics directly into their documentation and project spaces. They were using tools like Amazon QuickSight, Databricks, Datadog, and Kibana, and we built connectors to help them embed those visualizations seamlessly.

But many teams told us they relied on Sigma Computing as their primary analytics platform, especially those with data in Snowflake or other cloud data warehouses. They loved Sigma's intuitive interface and powerful analytics capabilities, but embedding that content in Confluence was either impossible or required complex workarounds.

That's why we built this connector - to give Sigma users the same seamless embedding experience we've provided for other platforms.

What You Can Embed

The app supports embedding Sigma content directly into Confluence pages:

  • Complete workbooks - Embed entire interactive workbooks with all pages and visualizations
  • Individual workbook pages - Share specific pages from multi-page workbooks
  • Workbook elements - Embed individual charts, tables, and other visualization elements
  • Data models - Share data exploration views (requires secure embedding)
  • Reports - Display formatted reports (requires secure embedding)

Simply type /sigma in any Confluence page, paste your Sigma embed URL, and the content appears live on your page. The app automatically detects whether you're using public or secure embedding and handles authentication accordingly.

Public and Secure Embedding Options

The app supports both of Sigma's embedding methods to match your security requirements:

Public embedding works great for content you want to share broadly with external stakeholders or when you don't need user-level permissions. Anyone who can view the Confluence page can see the embedded analytics without authentication.

Secure embedding is designed for internal use cases where you need to control access based on user identity and Sigma team permissions. Users are authenticated via JWT-signed URLs, ensuring they only see content their Sigma teams can access. This is ideal for sensitive data where access control matters.
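
Curious what the secure flow looks like under the hood? Here's a minimal server-side sketch, not the app's actual implementation: it assumes the standard claim set from Sigma's embedding documentation, and the credential variable names are placeholders.

import jwt from 'jsonwebtoken';
import { randomUUID } from 'crypto';

// Build a JWT-signed Sigma embed URL for a given viewer.
// SIGMA_CLIENT_ID / SIGMA_EMBED_SECRET are placeholder credential names.
function signSigmaEmbedUrl(baseEmbedUrl, userEmail) {
  const token = jwt.sign(
    {
      sub: userEmail,                   // the viewer's identity
      iss: process.env.SIGMA_CLIENT_ID, // embed client ID
      jti: randomUUID(),                // unique token ID, prevents replay
    },
    process.env.SIGMA_EMBED_SECRET,
    { algorithm: 'HS256', expiresIn: '5m' } // jsonwebtoken fills in iat/exp
  );
  return `${baseEmbedUrl}?:jwt=${token}&:embed=true`;
}

Because the token is short-lived and scoped to one user, a leaked URL quickly becomes useless.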

The best part? Content authors don't need to think about which method to use. They simply paste the Sigma URL and the app handles the authentication automatically based on how the embed was configured in Sigma.

Automatic User Management

For teams using secure embedding, we've built an automatic user management feature that eliminates the manual work of provisioning users in both systems.

Here's how it works - Confluence admins map Confluence groups to Sigma teams in the app settings. When a user views a page with embedded Sigma content for the first time, the app automatically creates their Sigma account and assigns them to the appropriate teams based on their Confluence group memberships.

This means:

  • No manual user provisioning between systems
  • Users automatically get the right level of access based on their existing Confluence groups
  • Team memberships stay synchronized as users join or leave groups
  • You can configure a fallback team for users who don't match any specific group mapping

For teams that prefer tighter control, manual user management is still fully supported - you provision users in Sigma first, and the app authenticates them based on their email address.

Getting Started

Installation takes just a few minutes:

  1. Install Sigma Computing Embedded Analytics for Confluence from the Atlassian Marketplace
  2. If you're using secure embedding, configure your Sigma client credentials in the app settings
  3. Get your embed URL from Sigma (public link or workbook path)
  4. Type /sigma in any Confluence page and paste your embed URL

That's it. Your Sigma analytics appear live on the Confluence page, synchronized with your source data in Snowflake, Databricks, or whichever data warehouse you're using.

For detailed setup instructions, visit our Sigma Analytics Setup Guide.

Built on the Same Foundation

This connector is built on the same security-first, performance-focused architecture as our other Confluence connector products. We use Atlassian's Forge platform, which means your data flows directly between Sigma and your users' browsers - nothing passes through our servers. The app simply facilitates authentication and provides the embedding interface.

We're following the same approach that made our QuickSight, Databricks, Datadog, and Kibana connectors successful - focus on security, performance, and making the embedding experience as simple as possible for content authors.

Try It Today

If your team uses Sigma Computing and Confluence, we'd love for you to try the connector. Install it from the Atlassian Marketplace and let us know what you think.

Have questions or run into any issues? Check out our setup guide or reach out at [email protected]. We're here to help.

Validate Atlassian Forge Manifest in VS Code

The manifest.yml file is the blueprint of any Atlassian Forge app, defining modules, permissions, and app identity. As an app grows, maintaining a valid manifest becomes critical.

Traditionally, developers rely on the @forge/cli tool to check for errors, specifically running commands like forge lint. Under the hood, the CLI uses the @forge/manifest package to validate your file against a strict specification. However, this workflow is reactive: you make a change, switch to the terminal, run a command, and wait for feedback.

A better experience is real-time validation directly inside your editor. This allows you to catch errors as you type and leverages features like autocompletion for complex module definitions.

The Challenge: Connecting VS Code to the Schema

To achieve this in Visual Studio Code, we rely on the Red Hat YAML extension. As the standard for YAML language support in VS Code (powered by the widely adopted yaml-language-server), this extension supports associating JSON schemas with YAML files to provide validation and IntelliSense.

However, there is a small hurdle: while Atlassian publishes the schema within the @forge/manifest npm package, they do not expose a public, stable URL for the schema file itself on their developer documentation.

The Solution: jsDelivr

To bridge this gap, we can use jsDelivr, a public, production-grade CDN that aggregates multiple providers to ensure high availability and performance. It allows us to access files directly from the published @forge/manifest package via a URL. By pointing the YAML extension to this URL, we can inject Atlassian's official validation rules directly into VS Code.

Step-by-Step Guide

1. Install the YAML Extension

First, ensure you have the YAML extension by Red Hat installed. You can find it in the VS Code Marketplace or by searching for "YAML" in the Extensions view.

2. Configure Schema Association

You can associate the remote schema with your manifest.yml file using one of two methods supported by the extension.

Method 1: The yaml.schemas Setting

This method applies the validation rules globally or per-project without modifying the manifest file itself. It uses the yaml.schemas setting documented in the extension's schema association guide.

  1. Open your settings (.vscode/settings.json for workspace or your user settings.json).
  2. Add the following configuration:
{
  "yaml.schemas": {
    "https://cdn.jsdelivr.net/npm/@forge/manifest/out/schema/manifest-schema.json": "**/manifest.yml"
  }
}

This configuration tells VS Code to download the schema from jsDelivr and apply it to any file named manifest.yml located in the root or any subdirectory of your workspace (the **/ prefix acts as a recursive wildcard).

Note: Be sure to adjust this pattern to match the actual location of your manifest.yml. Refer to the YAML extension's schema association guide for more examples.

Method 2: In-File Modeline

Alternatively, you can declare the schema inside the file itself. The YAML extension supports a specific comment format called a modeline.

Add this magic comment to the very first line of your manifest.yml:

# yaml-language-server: $schema=https://cdn.jsdelivr.net/npm/@forge/manifest/out/schema/manifest-schema.json

modules:
  jira:issuePanel:
    ...

This creates a direct link between this specific file and the schema version hosted on the CDN.

Tip: Version Pinning

The URLs used above point to the latest version of the schema. If you need to ensure stability or match a specific version of the CLI, you can pin the version in the jsDelivr URL. For example, to use version 12.0.0:

https://cdn.jsdelivr.net/npm/@forge/[email protected]/out/schema/manifest-schema.json

Summary

By connecting the Forge manifest schema to VS Code via jsDelivr, you transform the development experience:

  • Real-time Feedback: You no longer need to rely solely on forge lint. Syntax errors and invalid properties are underlined immediately in red.
  • Autocompletion: VS Code can now suggest valid keys and values (e.g., specific permissions or module types), dramatically reducing the time spent looking up documentation.
  • Documentation on Hover: Explanations for many manifest fields are available right under your cursor (note: not all fields are fully documented in the schema, but coverage is extensive).

This simple one-time setup removes friction from the "write-validate-fix" loop, letting you focus more on building your app features and less on debugging YAML structure.

Setting Up Identity-Only Google Accounts with Email Forwarding

Many organizations need Google accounts for authentication only - for SSO, admin access, or identity management - without giving the user a full Google Workspace license. This guide walks through setting up identity-only accounts using Cloud Identity Free and forwarding any emails sent to these accounts to an admin or a desired email address.


1. Enable Cloud Identity Free

  1. Go to the Google Admin Console → Billing → Buy or Upgrade, and find Cloud Identity Free under the Google Cloud Management section.
  2. Click Explore for Cloud Identity Free and add it to your organization.

This allows creation of users without consuming paid Google Workspace licenses.


2. Disable Automatic Workspace License Assignment

  1. Navigate to Billing → Subscriptions → Google Workspace. The name of the subscription may vary based on your Google Workspace plan.
  2. Under the License settings section, click Manage licensing settings.
  3. Turn off Automatic licensing.
  4. Optionally, turn off automatic licensing only for the organizational unit in which you intend to create the new identity-only users.

This ensures new users only get access to Cloud Identity without a Google Workspace license (no Gmail, Drive, or Docs).


3. Create Identity-Only Users

  1. In Admin Console → Directory → Users → Add new user.
  2. Enter the user’s name and primary email (e.g., [email protected]).
  3. Save the user.
  4. Confirm in Licenses that only Cloud Identity Free is assigned.
  5. Optionally, assign admin roles without a paid Workspace license.
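
When you need to provision many identity-only users, this step can also be scripted. Below is a minimal sketch using the Admin SDK Directory API through the googleapis Node.js client; it assumes a service account with domain-wide delegation, and the impersonated admin address is a placeholder.

import { google } from 'googleapis';

// Create an identity-only user via the Admin SDK Directory API.
// Credentials come from GOOGLE_APPLICATION_CREDENTIALS;
// '[email protected]' is a placeholder super admin to impersonate.
async function createIdentityOnlyUser(primaryEmail, givenName, familyName) {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/admin.directory.user'],
    clientOptions: { subject: '[email protected]' },
  });
  const admin = google.admin({ version: 'directory_v1', auth });
  await admin.users.insert({
    requestBody: {
      primaryEmail,
      name: { givenName, familyName },
      // Random initial password; these users sign in via SSO anyway.
      password: Math.random().toString(36).slice(2) + 'Aa1!',
    },
  });
}

With automatic licensing turned off (step 2), users created this way stay identity-only as well.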

4. Catch Email Sent to Identity-Only Users

Because Cloud Identity Free users do not have Gmail, emails sent to their addresses will normally bounce. You can catch them by creating a Gmail default routing rule:

  1. Admin Console → Apps → Google Workspace → Gmail → Default routing.
  2. Add a rule that applies the Change envelope recipient action to emails sent to non-existent addresses, redirecting them to a different account. You can replace the full recipient address, just the username, or just the domain.
  3. Save the rule.
  4. Emails sent to identity-only user addresses will now arrive in the specified mailbox.

This approach is recommended if you have multiple identity-only users and want fine-grained control over where their mail lands.


5. Testing

  1. Send an email to one of the identity-only addresses.
  2. Verify it is received in the desired mailbox.
  3. Log in as the identity-only user via SSO.
    • Ensure authentication works.
    • Confirm no Gmail, Drive, or Docs access.

✅ Benefits

  • Zero-cost identity accounts for authentication only.
  • Centralized email handling for identity-only users.
  • Supports SSO and admin access without paying for a Workspace license.
  • Scalable for contractors, test accounts, or temporary employees.

Understanding the Lifecycle of useRef in React and Avoiding Stale Reference Bugs

React's useRef hook is often used to persist values across renders without triggering re-renders. However, it's important to understand that useRef only survives re-renders, not re-mounts. This distinction can lead to subtle and hard-to-diagnose bugs, especially when working with closures or asynchronous logic.

Re-renders vs Remounts: Understanding the Difference

Before diving into the useRef lifecycle, it's crucial to understand the difference between re-renders and remounts in React:

Re-renders occur when React updates an existing component instance in response to state or props changes. During a re-render:

  • The component function runs again
  • New JSX is generated and compared to the previous render
  • The DOM is updated only where necessary (reconciliation)
  • Component instance and all hooks (including useRef) maintain their identity
  • Triggered by: setState, props changes from parent, context changes, or forceUpdate

Remounts occur when React completely destroys a component instance and creates a new one. During a remount:

  • The old component instance is fully discarded
  • All hooks are re-initialized from scratch
  • The entire component lifecycle starts over (mount → render → effect)
  • Any cleanup functions from the old instance are executed
  • Triggered by: key prop changes, conditional rendering (condition && <Component />), route navigation, or parent component structure changes

The key insight is that useRef survives re-renders but gets reset during remounts, which can lead to stale reference bugs when closures outlive the component instance.

useRef is tied to the component instance

When a component re-renders due to state or props changes, useRef maintains its identity. However, when the component is re-mounted - for example, due to a change in the key prop, conditional rendering, or route transitions - a new component instance is created, and the old one is discarded. The useRef is then re-initialized along with the rest of the component.

This becomes problematic when an earlier closure outlives the component itself and references the old ref object. Since that ref will never be updated again, the closure holds a stale reference. These are a few example cases where a closure might outlive the component:

  1. Uncancelled timers: setTimeout or setInterval callbacks that don't have their timers cleared in cleanup functions
  2. Promises: asynchronous operations that resolve after the component unmounts
  3. Recursive calls: functions that call themselves with timeouts or in response to events or processing results

Example: Logging a stale ref due to a remount

import { useEffect, useRef } from 'react';

function MyComponent({ id }) {
  const latestIdRef = useRef(id);

  useEffect(() => {
    latestIdRef.current = id;
  }, [id]);

  useEffect(() => {
    const interval = setInterval(() => {
      console.log('Latest ID from ref:', latestIdRef.current);
    }, 60000); // 1 minute

    // Without cleanup: interval continues after unmount with stale ref
    // return () => clearInterval(interval);
  }, []);

  return <div>Component with ID: {id}</div>;
}

And rendered like this:

<MyComponent key={userId} id={userId} />

In this setup, whenever userId changes, the key change forces a full unmount and remount. The new component gets a new useRef object, but the old setInterval continues to run and holds a reference to the stale ref from the destroyed component instance. When the interval eventually fires (after 1 minute in this example), it logs the outdated value because the old ref is never updated again. As a result, the logged value is stale.

Timeline example:

  1. Component mounts with userId: 1, interval starts with 1-minute delay
  2. After 30 seconds, userId changes to 2, triggering unmount/remount, but the interval is not cancelled upon unmount
  3. New component instance created with userId: 2 and a new latestIdRef
  4. After the full minute, the old interval fires, logging the stale value 1 instead of the current 2

This problem becomes more pronounced with longer timeouts or when remounts happen frequently relative to the interval duration.
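
The fix is the cleanup that's commented out in the example above: return a function from the effect that clears the interval, so the timer dies with the component instance. The full component then looks like this:

import { useEffect, useRef } from 'react';

function MyComponent({ id }) {
  const latestIdRef = useRef(id);

  useEffect(() => {
    latestIdRef.current = id;
  }, [id]);

  useEffect(() => {
    const interval = setInterval(() => {
      console.log('Latest ID from ref:', latestIdRef.current);
    }, 60000);

    // Cleanup runs on unmount (and before any effect re-run), so the
    // interval never outlives this component instance or its ref:
    return () => clearInterval(interval);
  }, []);

  return <div>Component with ID: {id}</div>;
}

With the cleanup in place, a key-driven remount cancels the old interval before the new instance starts its own.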

Recommendations

  • Avoid relying on useRef for values that need to persist across component unmounts and remounts.
  • Be cautious when using closures that capture useRef values, particularly in asynchronous or interval-based logic.
  • If persistence across remounts is required, consider lifting state to a shared parent component, using context, or using external state management.
  • Prefer lifecycle-safe patterns such as useEffect cleanups or observable subscriptions that are scoped to the component's active instance.

Understanding how useRef behaves with respect to the component lifecycle can help prevent subtle bugs and ensure your application logic remains predictable.


Writing Root Cause Analysis with Confluence and Data Visualization

When conducting a post-incident Root Cause Analysis (RCA), the ability to tell a clear, data-driven story is crucial. This post explores how to leverage modern observability tools and Confluence to create comprehensive RCA reports that drive meaningful discussions and prevent future incidents.

The Power of Visual Data in RCA

During incident investigation, raw logs and metrics can be overwhelming. Tools like Datadog and Kibana transform this data into actionable insights through visualization. Here's how to effectively use them:

Datadog Integration

The Datadog Connector for Confluence app allows you to:

  • Embed real-time metrics dashboards
  • Share performance graphs directly in your RCA report on Confluence

Kibana Visualization

With Kibana Cloud Connector for Confluence, you can:

  • Embed log analysis visualizations
  • Share error rate graphs
  • Display traffic patterns during the incident

Real-World Example: Database Scaling Incident

At Wavether, we recently used these tools while analyzing a database scaling incident for a client:

Incident: The client's product database experienced significant performance degradation during peak traffic.

Visualization approach:

  • Used Datadog to correlate user traffic spikes with database connection saturation
  • Embedded Kibana log analysis showing specific query patterns that triggered the issue
  • Created a timeline visualization mapping customer reports against backend metrics

Outcome: The visualizations clearly showed that the connection pooling settings were inadequate for the new traffic patterns, leading to a configuration update that prevented future incidents.

Best Practices for RCA Documentation

  1. Timeline Visualization

    • Create a timeline using Datadog's Events feature
    • Mark key events and correlate with metrics (see the sketch after this list)
  2. System Impact Analysis

    • Embed Datadog dashboards showing:
      • Error rates
      • Latency spikes
      • Resource utilization
  3. Log Analysis

    • Use Kibana visualizations to show:
      • Error patterns
      • Affected services
      • User impact
  4. Root Cause Confirmation

    • Correlate multiple data sources
    • Support findings with embedded graphs
    • Link to relevant monitoring dashboards
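
To illustrate the first practice above, timeline events can be created programmatically as well as from the UI. Here's a minimal sketch against Datadog's Events API (v1); the event fields and tag values are made up for illustration.

// Post an incident annotation to Datadog's Events API (v1).
// Requires DD_API_KEY in the environment; tag values are illustrative.
await fetch('https://api.datadoghq.com/api/v1/events', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'DD-API-KEY': process.env.DD_API_KEY,
  },
  body: JSON.stringify({
    title: 'DB connection pool saturated',
    text: 'Connections exhausted during peak traffic; linked from the RCA page.',
    date_happened: Math.floor(Date.now() / 1000), // Unix seconds
    tags: ['incident:db-scaling', 'service:product-db'],
    alert_type: 'error',
  }),
});

Events posted this way can then be overlaid on dashboard graphs, tying the incident narrative directly to the metrics.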

Adapting RCAs for Different Audiences

When creating data visualizations for RCAs, consider tailoring them based on the audience:

  • For Technical Teams: Include detailed metrics, code snippets, and specific technical indicators
  • For Product Management: Focus on user impact metrics and feature correlation
  • For Executive Stakeholders: Emphasize business impact, recovery time, and prevention strategies

Common Pitfalls to Avoid

  • Visualization overload: Too many graphs can obscure the story
  • Inconsistent time ranges: Ensure all visualizations use the same time boundaries
  • Missing context: Always annotate unusual patterns or key events
  • Correlation confusion: Remember that correlation doesn't imply causation

Tips for Maximum Impact

  • Keep dashboards focused and relevant
  • Annotate graphs to highlight key points
  • Use consistent time ranges across visualizations
  • Include links to live dashboards for further investigation

Getting Started

Try these tools today for free:

  1. Datadog Connector for Confluence
  2. Kibana Cloud Connector for Confluence

By combining these powerful visualization tools with Confluence's collaborative features, you can create clear, data-driven RCA documents that help prevent future incidents and improve system reliability.

Conclusion

Modern observability tools have transformed how we approach RCA. By leveraging Datadog and Kibana's visualization capabilities within Confluence, teams can create more effective, data-driven reports that lead to actionable insights and improved system reliability.