Bitbucket Data Center is now available for growing teams

By on August 2, 2016

Whether you’re burning the midnight oil at a startup or starting a new project within a larger organization, we want to be part of your Git journey. And in order to do that, we need to be there from the start. So Bitbucket Data Center now offers source code collaboration for professional teams, across any distance, whether your team is small or large.

Keep reading to learn more about what Bitbucket Data Center can offer any team, and how you can grow with Atlassian products.

Bitbucket Data Center is a next generation Git solution

When tools are mission critical to your work, you can't afford any downtime. It doesn't matter whether your team is made up of 25 or 2,500 people; what matters is that your team can do what needs to be done without worrying about outages. That makes a deployment option like Bitbucket Data Center the right solution for teams of all sizes, working together across the globe.

For example, if your Git solution gets hit by many users and clients at once (think CI/CD), then you need a highly available tool. Bitbucket Data Center is the only Git solution with out-of-the-box active-active clustering, which keeps your source code accessible with zero downtime, even during an unexpected node outage.

For distributed and/or large teams, Bitbucket Data Center provides features like smart mirroring and Git LFS (Large File Storage) to help with performance. Let's look at a team at Atlassian that fits the bill. The JIRA Software team has teams located in Sydney, Australia and Gdansk, Poland. On a typical workday it's common for a developer in Gdansk to need access to a repository hosted in Sydney. Instead of having to wait hours to clone the repository, the developer in Gdansk can clone and create a branch from the mirror we have set up in Gdansk within just a few minutes, thanks to smart mirroring. By speeding up clone times, our teams aren't worried about lost development time and can work more efficiently across countries and continents.
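
From a developer's point of view, using a mirror is simply a matter of cloning from the mirror's URL instead of the primary instance; fetches and clones hit the nearby mirror, while pushes are still made against the primary. A rough sketch, with hypothetical hostnames:

# Clone from the nearby mirror (hypothetical hostname) instead of the primary server in Sydney
git clone https://bitbucket-mirror.gdansk.example.com/scm/jira/jira-server.git
# Day-to-day fetches and clones are served by the mirror; pushes go to the primary instance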

Git LFS, on the other hand, is meant for teams that work with large media assets like images and videos. At Atlassian, Git LFS helps our designers, tech writers, and build engineers be more productive when working with developers, because those assets can be versioned right in Bitbucket Data Center. Developers don't need to shoulder-tap or leave their code when working with assets like videos or audio files, because they already have what they need to build exceptional UI experiences at scale. As software development moves further toward distributed agile development and toward apps built around large files and slick designs, tooling has to support these different ways of working.

Bitbucket Data Center offers the only solution with default reviewers

Reliability is not the only reason teams can grow with Bitbucket Data Center; it's the combination of highly available tooling and features that help teams move fast. A small team might be tired of hearing "We need to move faster," and a mid-size team might feel the pain of "Our team is growing and we need a more sophisticated way to manage our source code." Wherever you fall on this spectrum, everyone should have the same access to features that help them get what needs to be done, done.

Pull requests are a great example of a feature in Bitbucket Data Center that helps teams innovate: they yield quick turnaround times for code reviews, helping teams ship faster and deliver value to customers sooner. Pull requests also support default reviewers and commit-level review. Commit-level review, for example, shows comments on individual commits within a pull request, so reviewers can follow exactly what changed over the course of a review.

But pull requests are not the only feature built around teams and collaboration. At Atlassian, smart commits let developers transition JIRA Software issues by embedding specific commands into a commit message, right from the source code. Developers also use code search to find exactly what they're looking for, straight from the search bar. Pull requests, smart commits, and code search become especially important for staying organized and moving quickly as your instance, projects, and teams grow.
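
For a flavor of what a smart commit looks like, here's a hedged example. The issue key and wording below are made up, and #resolve assumes your JIRA workflow has a matching transition; #time logs work and #comment adds a comment to the issue:

# Hypothetical JIRA issue key JRA-123; the commands after it are processed by JIRA
git commit -m "JRA-123 #time 2h #comment Reworked the login flow #resolve"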

Bitbucket Data Center integrates and can be built upon

Besides high availability and features that help growing teams collaborate, Bitbucket Data Center can act as a development platform by setting up integrations. If your team uses existing Atlassian tools, like JIRA Software and Bamboo, you can set up Bitbucket Data Center with these tools for full traceability and continuous integration or continuous delivery.

Beyond integrations with Atlassian tools, the Atlassian Marketplace offers hundreds of add-ons to meet your team's needs. Integrations matter to any team, and a Git solution needs to support end users, application links with other Atlassian tools, API calls, and webhooks without slowing down response times. The sheer number of end users is only one part of the larger story of your application lifecycle management (ALM) solution, no matter how small or large your business is.

By adopting Bitbucket Data Center as a small team, you are working from day one with a solution designed for professional teams and rooted in collaboration, and one that is built to grow with your business as your team grows, wherever you are in the world.

Try Bitbucket Data Center

Already have Bitbucket Data Center? Update to Bitbucket Data Center 4.8


As excited about the new developments in Bitbucket Data Center as we are? Share this post on your social network of choice so others can learn what’s new, too!

New personalized dashboard in Bitbucket Cloud

By on July 28, 2016

Developers use pull requests to review code and ship quality software at speed; they're core to the software development process of many teams. Getting to the relevant pull request (aka PR) quickly matters for getting your work done on time. But finding your PRs in Bitbucket Cloud hasn't been easy: currently, you need 5 clicks (yes, we counted) before you reach the PR you're looking for. That's unacceptable!

What if PRs were front and center on your dashboard in Bitbucket Cloud? Today, we’re launching a new personalized dashboard that will list all the PRs that you’ve been asked to review and the PRs you’ve created, across all repos, in one single place. So, that PR you’re looking for is just a click away.

PR-focused dashboard

In this new personalized dashboard, we know staying on top of code review discussions is key, so we've added a blue dot on PR comments to indicate when there are new comments since the last time you looked.

If you’re not using PRs, don’t worry – the personalized dashboard will only list your recently updated repositories. And don’t forget that the full list of your repositories is just a tab away.

Dashboard with no PRs

We've been dogfooding the new personalized dashboard internally at Atlassian for the last month, and in that time, we've noticed our software teams clean up old pull requests faster and reduce code review time significantly! No more bookmarking the PR list page of individual repos and no more time wasted hunting for that damn PR.

We’re gradually rolling this out to all our Bitbucket users, but if you cannot wait and want to turn it on today, just head over to the Labs menu in Settings.

New to Bitbucket Cloud? No problem! Sign up here and enjoy the personalized dashboard in Bitbucket Cloud.

Sign up for Bitbucket

Bitbucket Pipelines Beta, now with Mercurial support

By on July 26, 2016

Git and Mercurial are the two most popular distributed version control systems in use today. There are many reasons for choosing one over the other, but they each do their job very well, and we're thrilled to support both in Bitbucket Cloud. Bitbucket Cloud is the largest hosting service that supports Mercurial, and one of the only ones that does. Today we're excited to announce that Bitbucket Pipelines, the continuous delivery feature within Bitbucket, now supports Mercurial as well.

Bitbucket Pipelines lets your team build, test, and deploy from Bitbucket. It’s built right within Bitbucket Cloud, giving you end-to-end visibility from coding to deployment. We initially launched Bitbucket Pipelines Beta with Git support only, but thanks to the enthusiastic feedback from our Mercurial users interested in continuous delivery, we decided to implement support for Mercurial as well.
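
If you're curious what a Pipelines setup involves, builds are driven by a bitbucket-pipelines.yml file committed to the root of the repository. Here's a minimal sketch for a Node.js project; the Docker image and scripts are placeholders rather than a recommended configuration:

# Sketch: add a minimal bitbucket-pipelines.yml and push it to trigger a build
cat > bitbucket-pipelines.yml <<'EOF'
image: node:4                  # Docker image used for the build container (placeholder)
pipelines:
  default:                     # runs for every branch without a more specific section
    - step:
        script:                # commands run in order; a non-zero exit fails the build
          - npm install
          - npm test
EOF
git add bitbucket-pipelines.yml
git commit -m "Add Pipelines configuration"
git push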

We’re excited that both our Git and Mercurial users are able to run Bitbucket Pipelines today. Sign up and get early access to the beta if you haven’t done so already!

Not a Bitbucket Cloud user? Start here by creating an account.

Sign up for the beta program

Bitbucket Cloud: now available over IPv6

By on July 21, 2016

Just after 18:00 UTC on 19 July 2016, we published our first set of AAAA records in DNS, making bitbucket.org and its associated hostnames available to the world over IPv6. We're taking a dual-stack approach here, so IPv4 addresses will continue to work for the foreseeable future – but any IPv6-only systems you manage will now be able to access Bitbucket APIs and repos, and any systems that work with both IPv4 and IPv6 will have additional routing options which may improve network performance. This also makes it easier for us to handle new networks and clients, especially as new IPv6-only systems come online.

IPv6 traffic picked up as soon as DNS servers started seeing AAAA records.

Most people will not have to do anything different to use our IPv6 addresses: in fact, if your local network and ISP both support IPv6, then you may already be using IPv6 to reach Bitbucket. However, some people may need to update their firewall configurations to permit the following destination IPv6 addresses for bitbucket.org:

Firewalls should also permit the following destination IPv6 addresses for api.bitbucket.org:

These addresses are listed alongside their IPv4 equivalents in our public documentation. We’ve also added IPv6 support for altssh.bitbucket.org, 2401:1d80:1010::15f and 2401:1d80:1003::15f, in case you need those, and set up forward and reverse DNS for all of our allocated IPv6 addresses.
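
If you'd like to see what your own systems resolve and whether they can reach us over IPv6, a quick sanity check from the command line looks like this (just the standard tools, not an official test procedure):

dig AAAA bitbucket.org +short      # list the IPv6 addresses currently published in DNS
curl -6 -I https://bitbucket.org/  # force an IPv6-only connection and fetch the response headers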

Our IPv6 work started as a ShipIt project by some members of our Network Engineering team, who showed a proof-of-concept implementation on Bitbucket in a live demo. We had to perform a number of network and infrastructure upgrades (including new IPv4 addresses) before we could start using IPv6 for real, but once those upgrades were done we moved pretty quickly through testing, preparation, and deployment.

The best part of this whole IPv6 rollout, though, is that nobody would have noticed it if we hadn't said anything. The Happy Eyeballs algorithm has done much of the heavy lifting here, seamlessly moving millions of sessions to IPv6. Our support team has seen a couple of tickets about IPv6-specific routing difficulties, but they were able to resolve or work around each issue within minutes. (If you're also having problems due to IPv6 routing, then please contact us at support@bitbucket.org for assistance.)

We’re very glad that this has gone so well, and that we can take the lessons we’ve learned deploying IPv6 to Bitbucket and apply them to other Atlassian products. Stay tuned for more updates on infrastructure projects!

3 new features in Bitbucket Server including commit-level review

By on July 19, 2016

Bitbucket Server 4.8 is all about faster turnaround time for pull requests and zero downtime backup. Keep reading to learn about three new features and how each one helps teams collaborate to produce higher quality code.

Break down big or long-running pull requests with commit-level review

Pull requests make collaboration easier for developers, whoever you are and wherever you sit in the code review process. Plus, they help teams follow best code review practices by including specified reviewers in the workflow. But what if you're the reviewer and you would like to review individual commits added to a pull request quickly?

The new commit-level review provides per-commit diff views and commenting within pull requests to help reviewers do just that – review individual commits added to a pull request. For example, if a pull request author has thoughtfully broken down their work into logical commits, you can now step through the changes for each commit. And at any time the effective diff for the whole pull request is still available for the “big picture” view.

Commit-level review is also really handy when returning to a pull request that you’ve already reviewed. You can see how your feedback has been incorporated without having it mixed up with changes you’ve already seen, because commit-level review allows you to view and comment on individual changes, one at a time. Highlighting changes at this granular level makes the reviewing experience better for every member on the team and will result in faster pull request turnarounds.

Set up default reviewers for pull requests

The focus on commit-level review in Bitbucket Server 4.8 is no coincidence, because pull requests are at the heart of workflows in Bitbucket. Reviewing a pull request should be easy, and so should adding reviewers – but not all teams have a frictionless process for assigning them.

For example, do you have a release gatekeeper on your team that reviews or merges every pull request? Or are you on a small team and everyone is asked to review each pull request? What if you’d like to avoid building up a list of reviewers from scratch for every pull request? With the new default reviewer feature, on any given repo you can configure a default set of reviewers that will be pre-populated on the pull request creation form. Default reviewers can even be defined by source and target branch, and once it’s set up you can add or remove people from the list as needed.

In addition, a new merge check is available for specifying that a certain number of these default reviewers must approve before a merge can occur. By setting up these checks, you can make sure the release manager gets a heads-up on pull requests targeting release branches, the senior dev approves all changes to the default branch, and the bug-master reviews all the bug fixes. This granularity of configuration by branch is a powerful tool and is one of the many ways that the pull request experience is unique to help your team be innovative.

Back up data without downtime

Backups are business as usual for any company that relies on hosted software. An important aspect of backups is making sure that consistent copies of data are made using supported mechanisms, and that has meant some amount of regular downtime. If you have builds that run at night and the downtime happens at night, then your builds will fail. If you have teams halfway around the world, then your nighttime is their daytime, resulting in downtime during their working hours.

In Bitbucket Server and Bitbucket Data Center 4.8, it's now possible to create a backup without locking or shutting down the instance, eliminating the downtime that backups used to require. As long as your file system and database snapshots are each internally consistent (various vendor-supported options exist for each), restoring from snapshots that aren't perfectly in sync with each other will still produce a working instance, one that tolerates the minor inconsistencies caused by operations that occurred during the backup window.

In addition, Bitbucket Data Center has an active recovery mode that can be run on start-up to find, log, and resolve any inconsistencies between filesystem and database. The addition of zero downtime backup to active-active clustering in Bitbucket Data Center is another way to ensure constant access for users.

For more details on zero downtime backup, see our updated documentation.

All of the new features in Bitbucket Server 4.8 support Git best practices and make collaboration easier for teams. Whether you're reviewing a pull request or assigning a reviewer, the two new pull request features streamline code reviews for quicker turnaround times and better quality code. The same can be said for zero downtime backup: with an active recovery mode, work isn't put on hold and collaboration doesn't come to a halt, whether you depend on automated clients or team members in other time zones. It's through these features that team members stay close to working code and produce higher quality code.

Update to Bitbucket Server 4.8

Check out the release notes for more information on these new features and the other improvements we’ve made in 4.8.


Did you find this post useful? Share it on your social network of choice so others can produce higher code quality, too!

Git Large File Storage (Git LFS) now in Bitbucket Cloud

By on July 18, 2016

In recent years software teams across all industries have adopted Git thanks to its raw speed, distributed nature, and powerful workflows. Additionally, modern software teams are increasingly cross-functional and consist not only of developers but also designers, QA engineers, tech writers, and more. In order to be successful, these teams need to collaborate not just on raw source code but also on rich media and large data files.

It’s no secret though that Git doesn’t handle large files very well and quickly bloats your repositories. We are therefore excited to announce that, following Bitbucket Server’s lead earlier this year, Git LFS is now available in beta for Bitbucket Cloud to improve the handling of your large assets. So even if your files are really, really large, Bitbucket Cloud allows your team to efficiently store, version and collaborate on them.

Why should you care about Git LFS?

Git was optimized for source code – it merges and compresses easily and is relatively small, so it is perfectly feasible to store all history everywhere. That same design makes Git inefficient and slow when tracking large binary files. For example, if your designer stores a 100 MB image in your Git repository and modifies it nine times, your repository could bloat to almost 1 GB, since binary deltas often compress poorly. Every developer, build agent, and deploy script cloning that repository would then have to download the full 1 GB history of changes, which can lead to drastically longer clone times. Just imagine what would happen if your designer made 99 changes to that file.

A common solution to this inherent flaw in Git is to track these large files outside of Git, in local storage systems or cloud storage providers, which leads to a whole new set of problems. Separating your large files from your repository will require your team to manually sync and communicate all changes to keep your code working.

With the addition of Git LFS support, you can say goodbye to all these problems and track all your files in one place in Bitbucket Cloud. Instead of bloating your Git repository, large files are kept in parallel storage and only lightweight references are stored in the repository, making it smaller and faster. The next time your team clones a repository with files stored in Git LFS, only the references and the large files that are part of your checked-out revision get downloaded, not the entire change history.
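
If you'd like a feel for the workflow, getting started is just a couple of commands; the *.psd pattern and file name below are only examples:

git lfs install                       # set up the Git LFS hooks on this machine
git lfs track "*.psd"                 # track Photoshop files with LFS (pattern is an example)
git add .gitattributes                # tracked patterns are recorded in .gitattributes
git add design/mockup.psd             # hypothetical large file
git commit -m "Add mockup via Git LFS"
git push                              # file contents go to LFS storage; the repo keeps a small pointer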

For those interested in a longer explanation of how Git LFS works and how to migrate your existing repository, watch this presentation by Tim Pettersen, Atlassian Developer Advocate, on Tracking huge files with Git LFS.

When is Git LFS right for you?

Generally, if you want to use Git to easily version your large files, Git LFS is the right choice. To call out just a few cases in which Git LFS will make your life easier, here’s a short list:

  • Game developers working with large textures, 3D, audio and video files
  • Mobile developers catering for higher and higher display resolutions
  • Web developers building pages with rich media
  • Software teams handling checked-in dependencies
  • Researchers working with huge data sets
  • Multimedia producers and designers
  • QA engineers using database snapshots for functional tests
  • Technical evangelists who store their presentation slides in Git

Git LFS support is built right into SourceTree

If you’re like us and use SourceTree, tracking files with Git LFS is one click away. Simply right click on any file type you’d like to track and select the “Track in Git LFS” option. SourceTree will do the rest for you.

What you will get with the Git LFS Beta

  • Track everything in one place – Track your large assets alongside your source code and stop worrying about manually versioning them in a separate storage system.
  • Store, version and share large files – Version all your large files with Git, no matter how large they are. With Git LFS you can store huge audio files, graphics, videos or any other binaries efficiently. During the beta period, you get 1 GB of storage for up to 5 users. If you are a paid team, your storage is based on your user tier.
  • Free up your repository – Store more in your Git repository. Git LFS stores your large files externally and keeps your actual Git repository lightweight.
  • Clone and fetch faster – Only download what you need. Git LFS will only fetch those large files that you check out and not the entire history of changes.
  • Just do git – No need to learn anything new. Git LFS seamlessly integrates into your Git workflow and does not require you to learn a new workflow, new commands or use new tools.

Git going right away

If you're ready to get started, sign up for a Bitbucket Cloud account. If you're already using Bitbucket Cloud, enable Git LFS with one click on the repos where you'd like to try it out, by heading to the "Git LFS Beta" section in the left sidebar.

Sign up for Bitbucket

Universal 2nd Factor (U2F) now supported in Bitbucket Cloud

By on June 22, 2016

Last week, we released support for FIDO Universal 2nd Factor in Bitbucket Cloud. FIDO U2F is an emerging standard for two-step verification that uses a physical USB key to digitally sign a challenge from a trusted website. The standard is designed to let small USB tokens, mobile phones, and other devices act as a secure second factor for 2FA without the overhead of installing drivers or client-side software.

What does this mean for you?
You may have heard about some high profile breaches and subsequent unauthorized publication of stolen user credentials in the past few weeks. Two-step verification on your Bitbucket Cloud account ensures that your data will continue to be protected even if someone else gets your password.

With U2F, instead of having to enter a TOTP (Time-based One-time Password) every time you want to log in to Bitbucket Cloud, you can simply press a button on a small USB device plugged into your computer. You are also less vulnerable to phishing attacks since security keys will only sign challenges that match the proper domain for the website.

Visit two-step verification settings to add your key. If you do not already have two-step verification enabled, you’ll need to enable it before you can use your U2F key with Bitbucket Cloud.

Special Yubikey promotion for Bitbucket users
You'll need to purchase a security key that supports U2F in order to take advantage of this feature. We're collaborating with Yubico, co-creator of the U2F protocol, to offer a limited-time discount: Bitbucket teams can purchase up to 10 keys at 25% off, while supplies last. You can find more information about the offer here.

We are proud to be among the first few websites to support this standard. “We applaud Atlassian for their support for the FIDO U2F protocol, by introducing this forward thinking strong public key cryptography two-factor authentication option to their user base,” said Jerrod Chong, VP Solutions Engineering, Yubico. Earning and keeping your trust is part of our customer commitment. Learn more about 2FA and U2F.

Atlassian account is coming to Bitbucket Cloud

By on June 14, 2016

Forget about having to remember multiple passwords and say hello to a better Atlassian experience in the cloud. Bitbucket users are now being upgraded to Atlassian account. The integration of Atlassian account with all Atlassian Cloud services is currently in progress, and when it is complete, you will be able to log in to Bitbucket, JIRA, Confluence, and HipChat with one Atlassian account. In addition, you can now use your Atlassian account to log in to SourceTree and our support and billing systems.

The upgrade process will ask you to verify the email address that we have on file. This should be quick and painless, and then it’s back to Bitbucket business as usual!

Already have an Atlassian account? No worries, we'll join your Bitbucket account to your existing Atlassian account. And if you want to change the email address associated with your Bitbucket account, click "Try another email" during the migration process. If you use Google authentication to log in, you can keep doing so provided it matches your Atlassian account email address.

Want to learn more about Atlassian account? Click here.

Git 2.9 is out!

By on June 13, 2016

Just a couple of months after version 2.8, Git 2.9 has shipped with a huge collection of usability improvements and new command options. Here are the new features that we're particularly excited about on the Bitbucket team.

git rebase can now exec without going interactive

Many developers use a rebasing workflow to keep a clean commit history. At Atlassian, we usually perform an explicit merge to get our feature branches onto master, to keep a record of where each feature was developed. However, we do avoid "spurious" merge commits by rebasing when pulling changes to our current branch from the upstream server (git pull --rebase) and occasionally to bring our feature branches up to date with master (git rebase master). If you're into rebasing, you're probably aware that each time you rebase, you're essentially rewriting history by applying each of your new commits on top of the specified base. Depending on the nature of the changes from the upstream branch, you may encounter test failures or even compilation problems for certain commits in your newly created history. If these changes cause merge conflicts, the rebase process will pause and allow you to resolve them. But changes that merge cleanly may still break compilation or tests, leaving broken commits littering your history.

However, you can instruct Git to run your project's test suite for each rewritten commit. Prior to Git 2.9 you could do this with a combination of git rebase --interactive and the exec command. For example:

git rebase master --interactive --exec="npm test"

would generate an interactive rebase plan which invokes npm test after rewriting each commit, ensuring that your tests still pass:

pick 2fde787 ACE-1294: replaced miniamalCommit with string in test
exec npm test
pick ed93626 ACE-1294: removed pull request service from test
exec npm test
pick b02eb9a ACE-1294: moved fromHash, toHash and diffType to batch
exec npm test
pick e68f710 ACE-1294: added testing data to batch email file
exec npm test

# Rebase f32fa9d..0ddde5f onto f32fa9d (20 command(s))

In the event that a test fails, rebase will pause to let you fix the tests (and apply your changes to that commit):

291 passing
1 failing

1) Host request “after all” hook:
Uncaught Error: connect ECONNRESET 127.0.0.1:3001

npm ERR! Test failed.
Execution failed: npm test
You can fix the problem, and then run

        git rebase --continue

This is handy, but needing to do an interactive rebase is a bit clunky. As of Git 2.9, you can perform a non-interactive rebase exec, with:

git rebase master -x "npm test"

Just replace npm test with make, rake, mvn clean install, or whatever you use to build and test your project.

git diff and git log now detect renamed files by default

Git doesn’t explicitly store the fact that files have been renamed. For example, if I renamed index.js to app.js and then ran a simple git diff, I’d get back what looks like a file deletion and addition:

diff --git a/app.js b/app.js
new file mode 100644
index 0000000..144ec7f
--- /dev/null
+++ b/app.js
@@ -0,0 +1 @@
+module.exports = require('./lib/index');
diff --git a/index.js b/index.js
deleted file mode 100644
index 144ec7f..0000000
--- a/index.js
+++ /dev/null
@@ -1 +0,0 @@
-module.exports = require('./lib/index');

A rename is technically just an addition and a deletion, but this isn't the most human-friendly way to show it. Instead, you can use the -M flag to instruct Git to detect renamed files on the fly when computing a diff. For the above example, git diff -M gives us:

diff --git a/app.js b/app.js
similarity index 100%
rename from index.js
rename to app.js

What does -M stand for? renaMes? Who cares! As of Git 2.9, the git diff and git log commands will both detect renames by default, unless you explicitly pass the --no-renames flag.

git clone learned --shallow-submodules

If you're using submodules, you're in luck – for once! 🙂 Git 2.9 introduces the --shallow-submodules flag that allows you to grab a full clone of your repository, and then recursively shallow clone any referenced submodules to a depth of one commit. This is useful if you don't need the full history of your project's dependencies. For example, if you have a large monorepo with each project stored as a submodule, you may want to clone with shallow submodules initially, and then selectively deepen the few projects you want to work with. Another scenario would be configuring a CI or CD job that needs to perform a merge: Git needs the primary repository's history in order to perform its recursive merge algorithm, and you'll likely also need the latest commit from each of your submodules in order to actually perform the build. However, you probably don't need the full history for every submodule, so retrieving just the latest commit will save you both time and bandwidth.
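
As a rough sketch of that workflow (the repository URL and submodule path below are hypothetical):

# Full clone of the primary repository, with every submodule cloned at a depth of one commit
git clone --recursive --shallow-submodules https://bitbucket.org/myteam/monorepo.git

# Later, selectively deepen just the submodule you actually need to work on
cd monorepo/projects/billing
git fetch --unshallow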

Overriding .git/hooks with core.hooksPath

Git’s comprehensive hook system is a powerful way to tap into the lifecycle of various Git commands. If you haven’t played with hooks yet, take a quick look at the .git/hooks directory in any Git repository. Git automatically populates it with a set of sample hook scripts:

$ ls .git/hooks

applypatch-msg.sample pre-applypatch.sample pre-rebase.sample
commit-msg.sample pre-commit.sample prepare-commit-msg.sample
post-update.sample pre-push.sample update.sample

Git hooks are useful for all sorts of things, for example:
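
Enforcing commit message conventions, running a linter before each commit, or blocking pushes to protected branches are all common uses. The piece that's new in Git 2.9 is the core.hooksPath setting, which lets you point Git at a shared, version-controlled hooks directory instead of copying scripts into each repository's .git/hooks. A minimal sketch, with a hypothetical directory and hook script:

# Keep the team's hooks in an ordinary directory that can live under version control
mkdir -p ~/team-hooks
cp pre-commit ~/team-hooks/pre-commit   # hypothetical hook script, e.g. one that runs a linter
chmod +x ~/team-hooks/pre-commit

# Use that directory instead of .git/hooks for the current repository...
git config core.hooksPath ~/team-hooks
# ...or for every repository on this machine
git config --global core.hooksPath ~/team-hooks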

App passwords are here in Bitbucket Cloud

By on June 6, 2016

Keeping your code secure is crucial. That's why last year we added support for two-factor authentication. However, native 2FA support alone prevents you from accessing Bitbucket repositories via third-party applications. Today, we're excited to announce application-specific passwords (a.k.a. app passwords), which allow you to do just that. App passwords let applications access Bitbucket's API over HTTPS when two-factor authentication is enabled on your Bitbucket account.

For example, you can use an app password in SourceTree to get full desktop access to your repositories when you have 2FA enabled. From the command line, you can make API calls with the app password instead of the account password, like:

curl --user bitbucket_user:app_password https://api.bitbucket.org/1.0/user/repositories

Granular scopes
You can set the scope of app passwords when you create them and give each application exactly the access it needs. We show you the last time each app password was used to access Bitbucket, and if you want, you can easily revoke an app password if something changes.

You can create and manage app passwords in the Access Management section of your account settings, or check out the docs to learn more.

Happy (and secure) coding!