Git LFS now available in Bitbucket Pipelines

March 20, 2018

Bitbucket Pipelines now has built-in Git LFS support, allowing you to seamlessly build, test and deploy LFS files in your builds!

To enable it, just add lfs: true in the clone section of your bitbucket-pipelines.yml. If you don’t enable this feature, your clone will continue to behave as before.

  clone:
    lfs: true
  pipelines:
    # ... rest of Pipelines configuration ...

How does it work?

Behind the scenes, Pipelines now always uses an LFS-enabled Git client when it clones your repository, allowing LFS files to be downloaded as part of your Pipelines build. However, we’ve decided to maintain the existing behavior of not cloning LFS files by default, since they can be slow to download and aren’t needed for many types of builds. So when LFS is not specifically enabled, Pipelines passes a parameter to the clone command to prevent it from pulling LFS files.

The LFS clone configuration applies to all pipelines and steps in your build configuration. If you want to pull LFS files only in specific pipelines or steps, you’ll need to use the same workaround as before: include an appropriate Git client in your build image, and pull with SSH authentication to retrieve the files.
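For reference, a per-step sketch of that workaround might look like the following (the image name is hypothetical, and it assumes an SSH key with access to the repository is already configured for Pipelines):

```yaml
pipelines:
  default:
    - step:
        # hypothetical build image that includes git and git-lfs
        image: my-git-lfs-image
        script:
          # fetch the LFS objects for this step only
          - git lfs install
          - git lfs pull
```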

When you try it out, you might be wondering why you don’t see git lfs commands in the Build setup of Pipelines. While researching this feature, we found that git clone is now as performant as the deprecated git lfs clone, a wrapper command that performed a standard clone followed by git lfs pull and is no longer recommended.

This should be a great improvement for all our Git LFS and Pipelines users. Please let us know what you think!

Support for large builds in Bitbucket Pipelines

February 20, 2018

There are software teams out there that want to take advantage of Bitbucket Pipelines but haven’t been able to trim down their software projects to fit. We have some good news.

Support for large builds

Large builds now allow software teams to take advantage of twice the memory and CPU in their pipelines.

To take advantage of double the resources, set the size of a pipeline or step to 2x.

Tell me how to get it

  options:
    size: 2x  # all steps in this repo get 8GB memory
  pipelines:
    default:
      - step:
          size: 2x  # or configure at step level
          script:
            - ...

What’s the cost?

Steps that use double resources will consume twice the number of build minutes from your monthly allocation, effectively costing you twice as much. So we recommend configuring large builds only where you need additional memory or speed.

Our early stats show that large builds have decreased the average build time for customers that use them, due to decreased CPU contention on our hosts, but your results will depend on the specific tasks your build performs.

We’re sure the optional extra memory and CPU allocation will prove extremely useful to the growing number of professional teams that are relying on Bitbucket Pipelines for all their CI/CD needs.

Tell us what you think: @Bitbucket.

Bitbucket Cloud gears up for an amazing 2018

January 25, 2018

2017 was a momentous year for Bitbucket Cloud users. 17 million pull requests merged, 6 million repositories created, and over 10 million Pipelines builds run – it’s clear that individuals and teams alike got stuff done last year! We added a ton of improvements and functionality you requested, and loads of you have already started using features like Pipelines and code aware search too. Here’s a look at 2017 in numbers:

With 2017 in the rearview mirror the Bitbucket Cloud team has its sights set on an even more amazing 2018. While there’ll be some exciting surprises along the way, our aim – as always – is to help teams and individuals alike build better software, faster. Here’s a look at some of the areas we focused on last year that we’ll continue to improve on over the next 12 months.

Greater context and collaboration

Integration with the key tools in your development workflow will be a focus as Bitbucket Cloud becomes the all-in-one place for teams to get started on their projects. We understand that you want the right context on your work at the right time as well, and we’ll continue to make this as easy as possible for individuals and teams in 2018.

Update Jira issues within Bitbucket’s UI

Many Bitbucket Cloud teams use Jira Software or Trello to plan, track, and manage their work. We addressed many of the issues with context switching last year and brought more of your planning into the product, and we’ll continue to do so over the next 12 months.

For Jira Software we leveled up the existing integration by letting you update Jira issues within Bitbucket’s UI. Transitioning, commenting, the works – it’s now possible to do all of that (and more) right inside of Bitbucket.

Embedded Trello boards in Bitbucket

And for teams that use a simple visual project management tool like Trello, we wanted to bring all of your planning and tracking into one place and further simplify your development process. So we brought Trello boards inside of Bitbucket Cloud, turning Bitbucket into a tool that wasn’t just a single source of truth for your code, but a central place for you to collaborate with your entire team too.

In 2018 we aim to build on these integrations and further improve how you collaborate with your team and around your code. Look out for deeper integrations with chat, security, monitoring services, and more. We’ve also listened to your valuable feedback and are looking to revamp the code review process, arguably an integral part of your experience in Bitbucket Cloud today.

Continuous integration and delivery

CI/CD continues to play a crucial role in the software development process and is an important part of the Bitbucket experience. Since its inception a little over a year ago, Bitbucket Pipelines has grown into a vital part of your development workflow. With over 10.7 million builds run and many of your requests and suggestions addressed, 2017 was a year of progress for the team: we shipped 6 of your top 10 most requested improvements.

Bitbucket Deployments

2018 will be no different, and you can look forward to more improvements for Pipelines and Bitbucket Deployments. Tight integration with Jira Software, tracking microservice deployments across your environments, improved monitoring and insights into delivery speed – these are a sample of features to look forward to as we make Bitbucket Cloud your single source of truth to manage and track your code from development through code review, build, test, and deployment – all the way to production.

Want to know more about our plans for Bitbucket Deployments this year? Read this blog to find out more!

Code aware search

You asked and we delivered in 2017 – we were excited to announce the introduction of code aware search in Bitbucket Cloud, the top voted feature request with over 2600 votes! We went one step further than simply indexing your code though, implementing semantic search that prioritizes definitions matching your search term over usages and variable names, resulting in higher quality search results.

Code aware search

Search will continue to be an important part of the Bitbucket Cloud experience, and we’ll continue to improve the user experience and search result quality based on your feedback.

Looking ahead

The above is just a small taste of the new features at your fingertips, and what you can look forward to in 2018. We can’t wait to show how we’ll be improving how you deliver and collaborate around your code over the next 12 months.

Here’s to a great 2018!

Try Bitbucket free


Bitbucket Deployments and the future of continuous delivery

January 24, 2018

What you’ve seen over the past few weeks with Deployments and Pipelines in Bitbucket Cloud is just the beginning of our mission to help every software team adopt continuous delivery – and we’ve got some exciting ideas in store for how to improve the product. If you have feedback about how to improve Bitbucket Pipelines and Deployments, please share with us here. We’d love to hear from you. 

Jira integration for shared visibility

It isn’t enough just to know which commits are in each deployment; teams want a higher-level overview and deployment status information directly in their tracking tool. Bitbucket Deployments will appear in your Jira issues, and will also support workflow automation so you can progress issues automatically as deployments go out.

Jira Pipelines integration

Insights into delivery speed

Teams want to know whether their investment in continuous delivery is paying off. Soon, you’ll be able to see how often code is deployed to each environment. We’ll use the historical deployment data tracked in Bitbucket to provide you with figures and charts so your team can set goals and track their release cadence.

Microservices deployment tracking

For teams that run multiple services, you’ll be able to track deployments across all the repositories and services that make up your application, see which version is deployed to each environment, and get delivery speed insights across those services too.

Microservices tracking

Monitoring integrations

As your team is deploying to production, imagine seeing both the deployment logs and system metrics live and side-by-side as your deployment goes out. We already have great integrations with vendors like Raygun and Rollbar for production monitoring, but want to take these to the next level for teams doing frequent deployments.

Monitoring integrations

Ready to try Bitbucket Deployments?

Learn more and let us know your thoughts.

Learn more

If you missed our original blog announcements for Bitbucket Deployments, you can find them here:

Follow us on Twitter to be notified of the next updates from Bitbucket. Thanks!

Pipelines stocking stuffers: test reporting, Docker run and large builds

December 20, 2017

We’ve recently added a few tasty new features to Bitbucket Pipelines that we wanted to share with you before the holiday break. Here’s what Bitbucket Santa has put under the tree for our good Pipelines customers this December. (Naughty customers should close their browser window now!)

Test reporting with zero configuration

We’ve added test reporting to Bitbucket Pipelines, a highly requested feature from our customers. If you’re generating test reports as part of your build process, you should already be seeing them picked up by Pipelines. They’ll appear in your build results and soon in notifications too.

Zero-config test reporting in Bitbucket Pipelines

As well as making your builds blazingly fast, our goal with Pipelines is to take the pain out of configuring CI/CD builds. To that end, we are always looking for ways to reduce configuration options that bloat and complicate things.

All the other build tools we’ve seen require you to configure exactly where your build stores test reports. Whenever you add a new module to your project, you might need to go back and update this configuration, or remember to set it when you configure a new project.

Unlike these other tools, test reporting in Pipelines requires zero configuration. We automatically scan your build directory up to 3 levels deep for directories matching “test-reports”, “test-results” or similar, then parse the JUnit-style XML files we find there.

This common format is supported and automatically generated by many tools (we’ve seen almost 10% of Pipelines builds pick up test reports automatically), and for tools that don’t automatically generate reports, we have instructions in our documentation.
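As an illustrative sketch (not from the original post), a Python project could use pytest to write a JUnit-style XML report into one of the auto-detected directory names:

```yaml
pipelines:
  default:
    - step:
        image: python:3.6.3
        script:
          - pip install pytest
          # write a JUnit-style XML report into "test-reports",
          # one of the directory names Pipelines scans automatically
          - pytest --junitxml=test-reports/report.xml
```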

A few improvements to test reporting are still in progress, like showing steps with successful tests, but we’re happy to make our first iteration available to you now.

Run Docker containers and docker-compose files

We’re also excited to announce that Pipelines now offers complete hosted Docker support, allowing you to build, run and test your Docker-based services in any configuration that doesn’t require privileged mode on the host. This includes using docker-compose to start a set of microservices up for testing on Pipelines.
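For example, a step like the following sketch (the service name and compose file contents are illustrative) could spin up a docker-compose stack and run tests against it:

```yaml
pipelines:
  default:
    - step:
        script:
          # start the microservices defined in docker-compose.yml
          - docker-compose up -d
          # run the (illustrative) test service against them
          - docker-compose run tests
        services:
          - docker  # enables the Docker daemon for this step
```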

Back in April 2017, we introduced the ability to build, tag and push Docker images from Pipelines up to your preferred registry. However, due to the security model of the Docker daemons in our shared infrastructure, we couldn’t offer the ability to run Docker containers and related tasks.

Thanks to some innovative work by our engineering team, we’ve extended our Docker authentication plugin to support all Docker commands while still preventing privileged commands from being run.

Because we now allow starting arbitrary containers via docker run, we also now enforce a 1GB memory limit on Docker usage in Pipelines, which was previously unlimited. You can read more about this change and how it might affect you in our infrastructure changes documentation.

Large builds: double resources for double cost

Ever since we launched Pipelines, we’ve had demand to support ever larger builds. We started offering 2GB of memory by default, and increased that to 4GB shortly after launch. Allowing builds with even more memory is a common request.

But one does not simply increase the resources allocated to customers on a hosted build service. Our default allocation factors in our hosting costs, typical customer needs, and maintaining a competitive price. To offer configuration options for larger builds, our scheduling and auto-scaling systems have to handle builds of varying sizes while continuing to keep queue times to an absolute minimum. Fortunately, the latter is what we’ve been able to achieve.

I’m pleased to announce that Pipelines now offers 8GB of memory, and similarly doubled resources (CPU, network, etc), as a new option for customers. Simply configure size: 2x in your Pipelines YAML file, either globally or on a specific step, to take advantage of this.

Steps that use double resources will consume twice the number of build minutes from your monthly allocation, effectively costing you twice as much. So we recommend configuring large builds only where you need additional memory or speed.

  options:
    size: 2x  # all steps in this repo get 8GB memory
  pipelines:
    default:
      - step:
          size: 2x  # or configure at step level
          script:
            - ...

Our early stats show that large builds have decreased the average build time for customers that use them, due to decreased CPU contention on our hosts, but your results will depend on the specific tasks your build performs.

We’re sure the optional extra memory and CPU allocation will prove extremely useful to the growing number of professional teams that are relying on Bitbucket Pipelines for all their CI/CD needs.

Happy holidays!

We hope you enjoy these great new additions to Bitbucket, and thanks to the thousands of new customers that joined us this year.
Please let us know how Bitbucket and/or Pipelines is helping you and your team to build great software. We’re always excited to hear your stories.

Bitbucket Deployments: flexibility meets CD best practices

December 13, 2017

We believe development tools are more powerful when they provide flexibility but also steer teams towards the practices that help them succeed. In the design of Bitbucket Deployments in Bitbucket Cloud, we’re using our experience working with thousands of customers to recommend and reinforce the practices we see working for teams in the trenches.

Designed to fit your workflow

Our first goal was to support a wide variety of different deployment workflows, and to provide visibility and confidence across all of them.

Regardless of how your team does deployments, we’ve designed Bitbucket Deployments to support your tracking model.

Multi-cloud support: AWS, GCP, Azure, and more

Support for multiple cloud vendors is part of the Bitbucket ethos, and that extends to our deployment features.

Bitbucket has deployment integrations with all the major cloud hosting platforms, whether that’s Amazon AWS, Microsoft Azure, Google Cloud Platform, or a Kubernetes cluster. By following these easy guides, you can start deploying today with Bitbucket Pipelines and tracking it through Deployments.

Built-in best practices for CD teams

While we provide a lot of flexibility for teams using Bitbucket Deployments, we also want to have some guardrails that guide teams towards best practices in continuous delivery.

First, we’ve found that the “deployment pipeline” approach of deploying the same code (and ideally the same build artifacts) to each environment in sequence dramatically improves teams’ speed and confidence in their release process. Bitbucket Deployments is designed around this idea, helping you progress quickly through each step in your defined release process rather than jumping around haphazardly.

Second, the use of a “staging” environment is a key part of how many of the best teams work. It is used to validate one-off release changes like database schema changes or data migrations against replicated production data, or as a checkpoint for verifying single-service changes in a multi-service environment. We want to encourage teams using push-button deployments to adopt a process that includes a staging environment.

Lastly, we’ve pre-configured the environment names in a way that we believe works for the vast majority of teams, who have test (or UAT/QA), staging (or pre-prod), and production (or live) environments. We want to encourage teams to use one, two, or three environments in this pattern. Running different code in more than three shared environments tends to be an anti-pattern, leading to developer confusion and an unclear release pipeline from code to production.

You can read more about our approach and best practice recommendations to deployments in our Bitbucket Deployments Recommendations.

Up next in the deployments journey…

If you’ve been following along with the blogs or on Twitter, you’d know that we’ve introduced the new Bitbucket Deployments, dug into some of the features, and now explained how we combine flexibility and best practices.

Next time we’ll look at some of the customers who have adopted Bitbucket Deployments and see what benefits they’re getting from the features already. Follow us on Twitter to be notified when the next post is out.

Confidence to release early and often: Introducing Bitbucket Deployments

December 5, 2017

Teams are deploying code faster than ever, thanks to continuous delivery practices and tools like Bitbucket Pipelines. But this has caused a huge problem: it’s hard to keep up with all the deployments and know where things are at.

In teams adopting continuous delivery, you hear questions like: “What’s running in staging right now?” “Has my change been deployed yet?” “Who released last, and when?”

To help you answer these questions and more, we’re excited to announce Bitbucket Deployments in Bitbucket Cloud. Bitbucket Deployments is the first deployment solution that sits next to your source code and can be configured with a single line of code.

Now there’s no need to set up and maintain a separate deployment tool, or scroll through unrelated builds in your CI service to analyze deployments. Bitbucket can manage and track your code from development through code review, build, test, and deployment – all the way to production. (In the future, we’ll be adding integrations with Jira to keep your boards and issues in sync with deployments as well!)

Let’s jump into the specific features to show you how tracking deployments with Bitbucket can help your team move faster today.

Deployment visibility with the new dashboard

Our new Deployments dashboard gives you a single place to see which version of your software is running in each environment, and a complete history of earlier deployments.

Bitbucket Deployments dashboard

Environments in Bitbucket are configured out of the box as test, staging, and production, and your team can choose to use one or more of these environments as needed. The current status shown on the dashboard reflects the last deployment that was attempted to that environment, as configured via the Pipelines YAML file.

Also shown on the dashboard is a full deployment history with a list of every deployment to each environment. You can see which build went out, who deployed it, and when it was deployed. When diagnosing problems, the history list can be filtered to show all the deployments to one environment to trace back and find the offending change.

Tying code and deployments together in the deployment summary

Bitbucket Cloud is now one tool to manage your source code and your deployments, so it’s smarter than the average deployment tool. You don’t have to wonder which code changes went out in a deployment, Bitbucket can tell you!

Here’s what you’ll see if you click on one of the deployments on the dashboard:

Bitbucket Deployments Summary

With this complete and detailed history of every deployment, investigating problems becomes much easier. Your team can quickly confirm the cause of a bug and roll forward with the fix.

Preview and promote deployments between environments

Preventing mistakes is a key part of every team’s deployment process. This is why so many teams still have manual checkpoints in an otherwise automated process. However, these manual checkpoints are more difficult than they need to be, with lead developers trawling through dozens of diffs and PRs to review a set of changes prior to pushing them live.

To help make this easier, Bitbucket Deployments will soon have a built-in promotion workflow, letting you take a verified build that is running in one environment and promote it to the next:

Bitbucket Deployment promotions

We’re also improving the chat notifications for Stride, HipChat, and Slack to keep your entire team in the loop about the releases once they go out. You’ll see Bitbucket Deployments round out with these improvements in the coming weeks.

Get started by enabling deployments in your build

The best bit? Once you’re signed up for Pipelines early access, it’s just a single line of code to enable deployment tracking in your Pipelines YAML configuration.

Check out the example below, where tracking is enabled for test, staging and production deployments:

Bitbucket Deployments YAML example
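The single line in question is the deployment keyword on a step. A minimal sketch (the step name and deploy script are illustrative) looks like this:

```yaml
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test   # this one line enables deployment tracking
        script:
          - ./deploy.sh test
```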

Please give it a try and let us know your thoughts.


Try Bitbucket Deployments


Stay tuned for more on deployments…

Today we started with a quick introduction to the new Bitbucket Deployments. In the coming weeks, we’ll have a bunch more to share, covering CD best practices and the future of Bitbucket Deployments. Stay tuned!

Excited about this news? Tell us why on Hacker News or Reddit


Bitbucket Pipelines Team

Better repository search and fork discovery come to Bitbucket Server 5.6

November 30, 2017

How long has your code base been around? Jira Software is our oldest project at Atlassian, clocking in at 15 years old. That’s a lot of code and, more importantly, a lot of repositories with the word “Jira” in the name! Our switch to Git, where it’s common to create more repositories with less content in each, didn’t help the matter.

If you can relate to this phenomenon, then you can probably relate to the struggle of searching for a specific repository or fork in Bitbucket Server today. New in Bitbucket Server 5.6 we’re taking steps to make the data discovery process much easier through improved repository search and fork discovery.

Improved Repository Search

Depending on the number of repositories you have in Bitbucket, finding the specific one you need can be a bit tricky. If you’ve never visited a repo before, searching for its name may show only a couple of possible results even if there are hundreds of repos that match your criteria. Sifting through irrelevant lists isn’t exactly what you hoped to be doing with your day, which is why we’re introducing a better way to find repositories in Bitbucket Server 5.6.

Search results are now easier to navigate, revealing more matching repositories in the quick search (located in the header) as well as on the search results page. The ability to filter repositories has also been added to the repository list page.

Repository Search in Bitbucket Server 5.6

Repository Fork Discovery

If your team uses a fork based workflow, you may commonly ask the questions “What repository is this forked from?” or “What forks exist of this repository?”. Until today, it was possible to get an answer only to the first question, leaving you guessing as to how to answer the second.

In Bitbucket Server 5.6 we’re bringing that information to the forefront by adding a new “Forks” menu item to each repository. Viewing all the forks in a single place makes it much easier to grok who has been doing what, and when.

Fork discovery in Bitbucket Server 5.6

Try Bitbucket Server 5.6

If you’re currently fielding lots of repositories or forks, then Bitbucket Server 5.6 can help. Give it a try today and see how easier data discovery can improve your 9-to-5.

Download Bitbucket Server 5.6

For a full list of updates to Bitbucket Server 5.6, check out our release notes.

New outbound IP addresses for webhooks

November 21, 2017

Bitbucket webhooks are used by teams every day to test, analyze, deploy, and distribute great software to millions of people.

In a few weeks, we will be making a change to our network configuration that results in these services routing through different IP addresses. We plan to make this change no earlier than Monday December 11th.

If you’re using webhooks and have firewall rules to allow Bitbucket to communicate with your servers, you’ll need to add the new IPs to your rules before Monday December 11th.
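As a sketch, if you manage your rules with iptables, an allow rule for one of the Bitbucket source ranges would look roughly like this (the CIDR below is a documentation placeholder, not an actual Bitbucket address range):

```
# allow webhook deliveries from a Bitbucket source range (placeholder CIDR)
iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 443 -j ACCEPT
```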

The current source IP address range is:

The new source IP addresses will be:

As always, you can consult our documentation for the current list of supported IPs for webhooks.

Pipelines manual steps for confidence in your deployment pipeline


Bitbucket Pipelines gives you the ability to build, test and deploy from directly within Bitbucket Cloud. Today, we’re excited to announce that you can now use manual steps in Bitbucket Pipelines. With manual steps, you can customize your CI/CD pipeline by configuring steps that will only be run when manually triggered by someone on your team.

This is a great addition for teams that require some manual process (e.g. manual testing) as a prerequisite to deploying their software. You can simply configure the deployment steps as manual steps, then trigger the deployment once the necessary testing or other activities have been done.

Manual steps can also be used as an optional final stage for additional automated testing. In cases where certain types of automated tests are expensive or time-consuming to run, adding them as a final manual stage gives your team discretion over when to run them.

To configure a step in your pipeline as manual, add trigger: manual to the step in your bitbucket-pipelines.yml file. The pipeline will pause when it reaches the step until it is manually triggered to run through the Pipelines web interface.

Below is an example bitbucket-pipelines.yml file which uses manual steps for the deployment steps.

image: python:3.6.3

pipelines:
  default:
    - step:
        name: Build and push to S3
        script:
          - apt-get update
          - apt-get install -y python-dev
          - curl -O
          - python
          - pip install awscli
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/test_app_$BITBUCKET_BUILD_NUMBER --ignore-hidden-files
    - step:
        name: Deploy to test
        script:
          - python
    - step:
        name: Deploy to staging
        trigger: manual
        script:
          - python
    - step:
        name: Deploy to production
        trigger: manual
        script:
          - python

Manual steps are designed as a replacement for using custom pipelines to trigger deployments, and we’ll be adding enhancements to support tracking these deployments in the coming months. Custom pipelines remain primarily the method for configuring scheduled builds and for running builds against a specific commit via the Bitbucket API.

For more information on manual steps and how to configure them, check out our documentation. For help configuring Pipelines to perform deployments, check out our deployment guides for your preferred platform.

We hope you like this addition to Pipelines. Happy building!


Learn more about Pipelines