Friday, June 23, 2017

mu-repo: Dealing with multiple git repositories

It's been a while since I've commented about mu-repo, so, now that 1.6.0 is available, I decided to give some more details on the latest additions ;)

-- if you're reading this and don't know what mu-repo is, it's a tool (written in Python) which helps when dealing with multiple git repositories, providing a way to call git commands on multiple repositories at once, along with some other bells and whistles. The project site has more details.

The last 2 major things that were introduced were:

1. A workflow for creating code-reviews in multiple repositories at once.

2. The possibility of executing non-git commands on multiple repositories at once.

For #1, the command mu open-url was created. Mostly, it'll compare the current branch against a different branch and open browser tabs, substituting the name of each repository into the url passed (the docs have more info and examples on how to use this with common git hosting platforms).
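The substitution idea can be sketched in a few lines. Note this is only an illustration of the concept: the '{repo}' placeholder name is made up here, and mu open-url's actual substitution syntax may differ.

```python
def build_review_urls(url_template, repo_names):
    # '{repo}' is a hypothetical placeholder used for illustration;
    # mu open-url's real syntax may differ.
    return [url_template.replace('{repo}', name) for name in repo_names]
```

One tab would then be opened per resulting url, one per repository involved in the review.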

For #2, it's possible to execute a given command in the tracked repositories by using the mu sh command: just call mu sh and pass the command you want to issue in each of them.

e.g.: calling mu sh python develop will call python develop on each of the tracked repository directories.
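Conceptually, what mu sh does can be sketched as below. This is a simplified illustration of the idea, not mu-repo's actual code:

```python
import subprocess

def run_in_repos(repo_dirs, command):
    # Run the same command in each tracked repository directory and
    # collect the return codes (simplified sketch of the `mu sh` idea).
    results = {}
    for repo in repo_dirs:
        proc = subprocess.run(command, cwd=repo,
                              capture_output=True, text=True)
        results[repo] = proc.returncode
    return results
```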

That's it... enjoy!

Thursday, June 08, 2017

PyDev 5.8.0: Code Coverage fixes, IronPython debugging

PyDev 5.8.0 is now available for download.

This release fixes some issues regarding the code coverage integration and adds support for using code coverage when running tests with pytest.

There were also fixes in the debugger for IronPython, which had stopped working due to IronPython's lack of sys._current_frames (an important note: IronPython 2.7.6 and 2.7.7 don't work with PyDev because of a critical issue in IronPython, so either stick with IronPython 2.7.5 or use the development version).

This is also the first release to add a way for clients to hook into the debugger, so it's possible to customize the representation of custom classes (see the documentation for more details).

There were also fixes in the PyLint integration, in updating docstrings, in finding __init__ on code-completion when it's resolved to a superclass, etc. See the release notes for more details.

Enjoy ;)

Wednesday, April 12, 2017

PyDev 5.7.0: PyLint integration and Jython debugging

PyDev 5.7.0 is now out. Among the major changes in this release is a much improved PyLint integration and a fix to the debugger which prevented it from working with Jython.

The major change in the PyLint integration is that instead of running as a builder within PyDev (which would act on any file changed), PyLint now uses the same structure that the PyDev code analysis uses.

This means that by default it'll run only on open files (so, it will run less frequently), while still being able to ask for a full analysis on all files below a folder.

Also, using Ctrl+1 on a line with PyLint errors will provide an option to ignore the PyLint error (in the same way it could already ignore a PyDev code analysis error) and if the same error is reported by PyDev and PyLint in the same line, only the one from PyDev will be shown.

-- the release notes have more details on the changes.

Besides this, there were improvements to the assign parameters to attributes action (which will no longer add assignments already available), and when there's already a docstring in a method, an option to update the docstring with the missing parameters is now presented (both actions are accessible through Ctrl+1 on a function def line).

Other changes may be seen in the full release notes.

LiClipse 3.6.0 is already out with the integrated changes.

Thank you very much to all the PyDev supporters and Patrons, who help to keep PyDev moving forward!

Thursday, March 23, 2017

PyDev 5.6.0 released: faster debugger, improved type inference for super and pytest fixtures

PyDev 5.6.0 is now already available for download (and is already bundled in LiClipse 3.5.0).

There are many improvements on this version!

The major one is that the PyDev.Debugger got some attention and should now be 60%-100% faster overall -- in all supported Python versions (and that's on top of the improvements done previously).

This improvement was a nice example of trading memory for speed: the major change is that the debugger now has 2 new caches, one saving whether a frame should be skipped and another saving whether a given line in a traced frame should be skipped, which lets the debugger make far fewer checks on those occasions.

Also, other fixes were done in the debugger. Namely:

  • the variables are now properly displayed when the interactive console is connected to a debug session;
  • it's possible to select the Qt version for which QThreads should be patched for the debugger to work with (in preferences > PyDev > Debug > Qt Threads);
  • fixed an issue where a 'native Qt signal is not callable' message was raised when connecting a signal to QThread.started;
  • fixed an issue displaying variables (Ctrl+Shift+D) when debugging.

Note: from this version onward, the debugger will only support Python 2.6+ (I believe there should be very few Python 2.5 users -- Python 2.6 itself stopped being supported in 2013, so I expect this change to affect almost no one; if someone really needs an older version of Python, it's always possible to get an older version of the IDE/debugger too). Also, from now on, the supported versions are properly tested on CI (2.6, 2.7 and 3.5 in one service and 2.7 and 3.5 in another).

The code-completion (Ctrl+Space) and find definition (F3) also had improvements and can now deal with the Python super (so, it's possible to get completions and go to the definition of a method declared in a superclass when using the super construct) and pytest fixtures (so, if you have a pytest fixture, you should now be able to have completions/go to its definition even if you don't add a docstring to the parameter saying its expected type).

Also, this release improved the support for third-party packages: coverage, pycodestyle (previously pep8) and autopep8 now use the latest version available. Also, PyLint was improved to use the same thread pool used in code-analysis, and an issue in the Django shell was fixed for Django >= 1.10.

And to finish, the preferences for running unit-tests can now be saved to the project or user settings (i.e.: preferences > PyDev > PyUnit > Save to ...) and an issue was fixed when coloring the matrix multiplication operator (which was wrongly recognized as a decorator).

Thank you very much to all the PyDev supporters and Patrons, who help to keep PyDev moving forward, and to JetBrains, which sponsored many of the improvements done in the PyDev.Debugger.

Tuesday, January 31, 2017

PyDev 5.5 released

The main features introduced in PyDev 5.5 are:

  • Ctrl+Shift+Alt+O allows jumping directly to the last hyperlink in the console (which means that when you have some exception on the console, it can be used to go directly to the error location without using the mouse).
  • Ctrl+2, sw switches the target and value in an assignment (but may not work properly if more than one '=' char is found in the line).
  • The code-completion which adds a local import can now be configured to add the local import to the top of the method, not only in the line above the current line (to use it, request a code-completion for some token which needs to be imported, then press tab to focus the completion pop-up and apply the completion with Shift pressed).
  • Another improvement in code-completion is that it now properly supports method chaining and provides 'value' and 'name' fields when accessing enums.
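The Ctrl+2, sw swap mentioned above can be sketched roughly like this. It's a toy version, not PyDev's actual implementation, and -- like the IDE action -- it misbehaves when there's more than one '=' in the line:

```python
def switch_assignment(line):
    # Swap target and value around the first '=' found in the line.
    target, sep, value = line.partition('=')
    if not sep:
        return line  # no assignment in the line, nothing to swap
    return value.strip() + ' = ' + target.strip()
```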

Apart from those, multiple bug-fixes are also available (in refactoring, hovering on debug, parsing nested async calls on Py3 and grouping imports).


p.s.: Thank you very much to all the PyDev supporters and Patrons, who help to keep PyDev moving forward.

For LiClipse users, 3.4.0 is already available with the latest PyDev.

Wednesday, November 30, 2016

PyDev 5.4.0 (Python 3.6, Patreon crowdfunding)

PyDev 5.4.0 is now available.

The main new feature in this release is support for Python 3.6 -- it's still not 100% complete, but at least all the new syntax is already supported (so, for instance, syntax and code analysis is already done, even inside f-strings, although code-completion is still not available inside them).

Also, it's now possible to launch modules using the python '-m' flag (so, PyDev will resolve the module name from the file and will launch it using 'python -m'). Note that this must be enabled at 'Preferences > PyDev > Run'.
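Resolving a dotted module name from a file can be sketched as below. This is illustrative only, not PyDev's actual code: it just walks from the source folder to the file and converts path separators into dots.

```python
import os

def module_name_from_file(filename, source_folder):
    # Derive the dotted module name a `python -m` launch would need,
    # relative to the given source folder.
    rel = os.path.relpath(filename, source_folder)
    rel, _ext = os.path.splitext(rel)
    return rel.replace(os.sep, '.')
```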

The last feature which I think is noteworthy is that the debugger can now show return values (note that this has a side-effect of making those variables live longer, so, if your program cares deeply about that, it's possible to disable it in Preferences > PyDev > Debug).

Now, for those that enjoy using PyDev, I've started a crowdfunding campaign at Patreon so that PyDev users can help to keep it properly maintained... Really, without the PyDev supporters, it wouldn't be possible to keep it going -- thanks a lot! Hope to see you there too ;)

Friday, November 25, 2016

Python Import the world Anti-pattern!

I've recently been bitten by an anti-pattern which seems to be way too common in Python.

Now, what exactly is that "import the world" anti-pattern?

This anti-pattern (which I just made up) is characterized by importing lots of other files in the top-level scope of your module or package, or by doing too much at import time.

Now, you may ask: why is it bad? All the code I see in the Python world seems to be structured like that...

It's pretty simple actually: in Python, everything is dynamic, so when you import a module you're actually making the Python interpreter load that file and run its bytecode, which will in turn create classes and methods (and do anything else which is in the global scope of your module or class definition).

-- sure, even worse would be going on, at import time, to connect to some database or do other nasty stuff you wouldn't be expecting by just importing a module -- or who knows, registering with another service just because you imported some module! Importing code should be mostly free of side effects, besides, you know, generating those classes and methods and putting the module on sys.modules.
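The "importing runs code" point can be made concrete with a self-contained sketch (the module name below is made up). It builds a module from source and shows that everything at the top level executes immediately, while function bodies only run when called:

```python
import sys
import types

# Source of a hypothetical module: the append() at the top level runs
# at import time; useful_function's body does not.
source = """
import_time_events = []
import_time_events.append('ran at import time')

def useful_function():
    return 'only runs when called'
"""

module = types.ModuleType('side_effect_demo')
exec(compile(source, '<side_effect_demo>', 'exec'), module.__dict__)
sys.modules['side_effect_demo'] = module  # later imports reuse this entry

import side_effect_demo  # fetches the already-executed module
```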

Ok, ok, I deviated from the main topic: why is it so bad having all those imports at the top-level of your module?

The reason is simple: nobody wants a gazillion dependencies just because they imported some module for some simple operation.

It's slow -- so much so that any command line application that has passed the toy stage and is concerned about the user experience has to hack around it. Really, just ask the mercurial guys.

It wastes memory (why load all those modules if they won't be used anyway?).

It adds dependencies which wouldn't be needed in the first place (like a library which needs a lapack implementation, which needs some ancient incantations to be compiled, and which I don't care about because I won't be using the functions that need it in the first place).

It makes testing just a part of your code much slower (i.e.: you'll load 500 modules just for a small unit-test which touches just a small portion of your code).

It makes testing with pytest-xdist much slower (because it'll import all the code in all of its slaves instead of loading just what would be needed for a given worker).

So, please, just don't.

Ok, but does that really happen in practice?

Well, let me show you the examples I have stumbled in the last few days:

1. conda: Conda is a super-nice command line application to manage virtual environments and get dependencies. But let's take a look under the hood:

The main thing you use in conda is the command line, so let's say you want to play with the "conda_env.cli.main" module. How long does a simple "from conda_env.cli import main" take, and how much does it pull in?

Let's see:

>>> import sys
>>> sys.path.append('C:\\Program Files\\Brainwy\\PyVmMonitor 1.0.1\\public_api')
>>> import pyvmmonitor
>>> pyvmmonitor.connect()
>>> len(sys.modules)
123
>>> @pyvmmonitor.profile_method
... def check():
...     from conda_env.cli import main
>>> check()
>>> len(sys.modules)

And it generates the following call graph:

Now, wait, what just happened? Haven't you just imported a module? Yes... I have, and in turn it has taken 0.3 seconds, loaded its configuration file under the hood (and made up some kind of global state?), parsed yaml, and imported lots of other things in turn (which I wish never happened) -- and it'd be even worse if you did a "conda env" command, because it imports lots of stuff, parses the arguments and then decides to call a new command line with subprocess as "conda-env", going on to do everything again.

2. mock: Again, this is a pretty nice library (so much that it was added to the standard library on Python 3), but still, what do you expect from "import mock"?

Let's see:

>>> import sys
>>> sys.path.append('C:\\Program Files\\Brainwy\\PyVmMonitor 1.0.1\\public_api')
>>> import pyvmmonitor
>>> pyvmmonitor.connect()
>>> len(sys.modules)
123
>>> @pyvmmonitor.profile_method
... def check():
...     import mock
>>> check()
>>> len(sys.modules)

And it generates the following call graph:

Ok, now there are fewer deps, but the time is roughly the same. Why? Because to define its version, instead of doing:

__version__ = '2.0.0'

it did:

from pbr.version import VersionInfo

_v = VersionInfo('mock').semantic_version()
__version__ = _v.release_string()
version_info = _v.version_tuple()

And that went on to inspect lots of things system-wide, including importing setuptools, which in turn parsed auxiliary files, etc. Definitely not what I'd expect when importing a library which does mocks (really, setuptools is for setup time, not run time).

Now, how can this be solved?

Well, the most standard way is not putting the imports in the top-level scope. Just use local imports and try to keep your public API simple -- simple APIs are much better than complex APIs ;)

Some examples: if you have a library which wants to export some methods from its __init__ package, don't import the real implementations and put them in the namespace; just define those methods (say, loads and dumps) directly in __init__ and have them do the real work through lazy, local imports of the actual implementation (or do import it, but then make sure that the modules containing loads and dumps don't have global imports themselves).
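A minimal sketch of that __init__ layout, with json standing in for the heavy implementation module (the package and function names are just for illustration):

```python
# Hypothetical mypackage/__init__.py: the public API is defined here, but
# the implementation module is only imported when a function is called,
# not when the package itself is imported.

def loads(data):
    import json  # local import: pays the cost only on first use
    return json.loads(data)

def dumps(obj):
    import json
    return json.dumps(obj)
```

Importing the package stays cheap, and callers that never touch loads/dumps never pay for the implementation's dependencies.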

Classes may be the trickier case if you want them to be bases for others to implement (because then you really need the class at the proper place for users of your API); here, you can explicitly import that class in your __init__, but then make sure that the module which defines that class imports only what's needed to define it, not to use it.

Please, don't try to use tricks such as global import hooks for your own library... it just complicates the lives of everyone that wants to use it (and Python has the tools for you to work with that problem without resorting to something which changes how Python imports behave globally), and don't try to load some global state behind the scenes (explicit is way better than implicit).

Maybe the ideal would be having Python itself do all imports lazily (but that's probably impossible right now), or having support for a statement such as from xxx lazy_import yyy, so that you could just shove everything at your top-level; until then, you can resort to local imports. Note: you could still create your own version of the lazy import to put things in the global scope, but as it's not standard, IDEs and refactoring tools may not always recognize it, so I'd advise against it too, given that local imports do work properly (although if you want some kind of global registry, register strings to be lazily loaded when needed instead of importing modules/classes to fill up your registry).
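The registry-of-strings idea can be sketched as follows (the function names are made up for illustration): registering stores only a "module:attribute" string, so filling the registry imports nothing, and the module is loaded only when an entry is actually resolved.

```python
import importlib

_registry = {}  # name -> 'module:attribute' string; nothing imported yet

def register(name, target):
    # Store a plain string, so registering is essentially free.
    _registry[name] = target

def resolve(name):
    # Import happens here, on first actual use of the entry.
    module_name, attribute = _registry[name].split(':')
    module = importlib.import_module(module_name)
    return getattr(module, attribute)
```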

Really, this is not new ;)