
Mocking External API in Python

Integrating with a third-party program is a fantastic way to extend your product's functionality. Because you do not own the external service, you control neither the servers it runs on, nor the code that implements its logic, nor the data it exchanges with your app. On top of these problems, the data changes frequently as users interact with the service.

You have no control over a third-party application, and quite a few of them fail to provide testing servers. You cannot test against live data; even if you could, the results would be unreliable because the data changes as it is used. Moreover, you should never let your automated tests depend on an external server: if releasing your code depends on whether your tests pass, an outage on their end could halt your progress. Fortunately, there is a technique for testing your integration with a third-party API in a safe setting without connecting to an external data source. The answer is to use mocks to simulate the behavior of the external program.

Mocking External API

A mock is a fake object constructed to look and act like real data. You swap it with the actual object and mislead the system into thinking the fake is real. Using a mock reminds me of a classic movie trope: the hero knocks out a henchman, dons his uniform, and walks into a crowd of approaching foes. Everyone keeps moving, and the impostor goes unnoticed; it's business as usual.

A good candidate for mocking in your application is a third-party authentication scheme such as OAuth. To access its APIs, your application must complete the OAuth flow, which involves real user data and communication with an external server. Mock authentication lets you test your system as an authorized user without going through the actual exchange of credentials. In this case, whether your system can authenticate a user successfully is not what you want to test; you want to test how your application's features behave once the user is authorized.

Initial steps

First, set up a fresh development environment to house the project code. Then create a new virtual environment and install the following libraries:
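The install commands themselves are missing from the article; assuming pip and a standard virtual environment, the step would look something like this:

```shell
# Create and activate a virtual environment, then install the libraries.
python -m venv venv
. venv/bin/activate
python -m pip install mock nose requests
```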

In case you are unfamiliar with any of the libraries you are installing, here is a brief description of each one:

  • The mock library lets you test Python code by replacing parts of your system with mock objects. NOTE: If you are using Python 3.3 or higher, mock is part of the standard library as unittest.mock. Install the mock backport library only if you are on an older version.
  • The nose library extends the built-in Python unittest module to make testing easier. You can get the same results with unittest or other third-party tools such as pytest, but I prefer nose's assertion methods.
  • The requests package substantially simplifies HTTP calls in Python.

During this tutorial, you will interact with JSONPlaceholder, a fake online API built for testing. Before writing any tests, you need to know what to expect from the API.

First, assume that the API you are targeting responds to the requests you send it. Verify this assumption by calling an endpoint with cURL:

This request should return a list of todo items in JSON format. Pay close attention to how the todo data in the response is organized: you should see a list of objects with the keys userId, id, title, and completed. Now that you know what data to expect, you can make your second assumption: the API endpoint is live and operational. You demonstrated that by calling it from the command line. Now write a nose test to verify that the server is alive in the future. Keep it simple: the only thing that matters is whether the server responds with an OK.

Filename: project/tests/
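The test file's contents are missing from the article (its path is also truncated above). A minimal sketch, using a plain assert in place of nose's assertion helpers, might be:

```python
# Hypothetical contents of the truncated test file (e.g. test_todos.py).
import requests


def test_request_response():
    # Send a request to the API server and store the response.
    response = requests.get('https://jsonplaceholder.typicode.com/todos')

    # Confirm that the request-response cycle completed successfully.
    assert response.ok
```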

Run the test and watch it pass:

$ nosetests --verbosity=2 project
test_todos.test_request_response ... ok
----------------------------------------------------------------------
Ran 1 test in 9.330s

OK

Creating a service by refactoring your code

Your application will likely make numerous calls to an external API, and such API calls may involve logic beyond sending an HTTP request, such as filtering, data processing, and error handling. You should pull the code out of your test and refactor it into a service function that encapsulates all of this expected behavior.

Rewrite your test to reference the service function and to test the new logic.

Filename: project/tests/

Run the test to see it fail, then add the bare minimum of code to make it succeed:

Filename: project/
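Neither file's contents survive in the article (both paths are truncated above). A combined, self-contained sketch of the service function and the rewritten test, with assumed names for the base URL constant, could be:

```python
# Hypothetical service module: the refactored request logic.
import requests

BASE_URL = 'https://jsonplaceholder.typicode.com'  # would live in a constants file
TODOS_URL = BASE_URL + '/todos'


def get_todos():
    # Return the response on success, None on failure.
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


# Hypothetical test: it now exercises the service function instead of
# calling requests directly.
def test_request_response():
    response = get_todos()
    assert response is not None
```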


Your initial test asserted that the response had an OK status. That logic has been refactored into a service function that returns the response when the request to the server succeeds; if the request fails, it returns None. The test now asserts that the function does not return None.

Notice that I had you create a separate file and supply it with a BASE_URL. The service function extends BASE_URL to produce the TODOS_URL, and since all API endpoints share the same base, you can keep building new ones by modifying that one piece of code. Keeping BASE_URL in a separate file also makes it easier to change in one place when numerous modules use it.

Execute the test and observe it pass.


$ nosetests --verbosity=2 project
test_todos.test_request_response ... ok
----------------------------------------------------------------------
Ran 1 test in 1.475s

OK


Your first mock

The code is functioning as intended; you know this because the test passes. Unfortunately, your system is still directly contacting the remote server: when you call the get_todos() function, your code sends a request to the API endpoint and returns a result that depends on that service being available. Here, I'll show you how to decouple your code from the external library by substituting the real request with a fake one that yields the same data.
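The patched test itself is missing from the article. A self-contained sketch using unittest.mock (the service function is inlined so the snippet runs on its own; in a real project you would patch the path where requests is used, e.g. something like 'project.services.requests.get'):

```python
from unittest.mock import patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


@patch('requests.get')  # replace requests.get with a mock for this test
def test_request_response(mock_get):
    # Dress the mock up to behave like a successful response.
    mock_get.return_value.ok = True

    response = get_todos()
    assert response is not None  # no real HTTP request was sent
```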


You'll see that I made no modifications to the service function; the test was the only area of the code that I changed. First, I imported the patch() function from the mock library. Next, I applied patch() as a decorator to the test function, passing it a reference to the function I wanted to mock. Finally, I passed a parameter named mock_get into the test function and, in its body, set mock_get.return_value.ok = True.

Great. So what happens when the test runs? Before I continue, you need a basic understanding of how the requests library operates. When invoked, the requests.get() function secretly sends an HTTP request and returns an HTTP response in the form of a Response object. requests.get() is the best function to target because it interacts with the external server directly. Do you recall the scene where the hero changed into the adversary's uniform? You must dress the fake so that it looks and behaves like the get() method.

When the test method is called, it locates the module in which the requests library has been declared and substitutes a mock for the targeted function, requests.get(). The test also instructs the fake to behave the way the service function anticipates it will. Looking at get_todos(), you can see that the function's success depends on response.ok returning True; that is what the statement mock_get.return_value.ok = True accomplishes. The mock will return True when the ok property is accessed, just like the real object. The get_todos() function returns the response, which is the mock, and because the mock is not None, the test will pass.

Test it out to see if it passes.

Other ways to patch

Using a decorator is just one method to patch a function with a mock. The next example uses a context manager to patch a function explicitly within a block of code. Any code inside the with statement's block that uses the function will be patched; after the code block ends, the original function is restored. The decorator and the with statement achieve the same objective: both approaches patch requests.get().


Check if the tests still pass by running them.

Using a patcher is another technique to patch a function. Here, I first identify the source to patch and then explicitly begin using the mock. The patching continues until I explicitly tell the system to stop using the mock.
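The patcher version is also missing; a self-contained sketch of the same test with explicit start() and stop() calls:

```python
from unittest.mock import patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


def test_request_response():
    mock_get_patcher = patch('requests.get')

    # Start patching requests.get.
    mock_get = mock_get_patcher.start()
    mock_get.return_value.ok = True

    response = get_todos()

    # Stop patching: requests.get is restored from here on.
    mock_get_patcher.stop()

    assert response is not None
```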


Repeat the tests to achieve the same positive outcome.

Now that you have seen three different ways to patch a function with a mock, when should you apply each method? The quick response is that it is entirely up to you. Any patching technique is entirely legitimate. Having said that, the following patching techniques perform very well with particular coding patterns.

  1. Use a decorator when every line of code in the body of your test function uses a mock.
  2. Use a context manager when some code in your test function uses a mock and other code references the real function.
  3. Use a patcher when you need to explicitly start and stop mocking a function across several tests (for example, in setup and teardown methods).

Mocking the complete service behaviour

In the earlier examples, you created a simple mock and checked a straightforward assertion: whether the get_todos() function returned None. The get_todos() function contacts the external API and returns a result. If the request succeeds, the function returns a response object that includes a JSON-serialized list of todos; if the request fails, get_todos() returns None. In the example below, I show how to mock the full functionality of get_todos(). The first cURL call you made to the server at the start of this tutorial returned a JSON list of dictionaries representing your todo items. This example shows how to mock that data.

Look at how @patch() functions: you give it a path to the function you want to mock. Once the function is located, patch() creates a Mock object, which temporarily replaces the real function. When the test calls get_todos(), the function uses mock_get in the same manner as it would the real get() method: it calls mock_get as a function and expects a response object in return.

In this instance, the response object is a Response object from the requests library, which includes several attributes and methods. You faked one of those attributes, ok, in the previous example. The Response object also has a json() method, which transforms its JSON-serialized string content into a Python datatype.


I mentioned the previous example because, when you ran the get_todos() function patched with a mock, the code returned a mock object as the response. You may have observed a pattern: whenever return_value is added to a mock, the mock is modified so it can be called like a function, and by default it returns another mock object. In this example, I made that explicit by declaring the Mock object directly: mock_get.return_value = Mock(ok=True). mock_get() mirrors requests.get(): requests.get() returns a Response, whereas mock_get() returns a Mock. Because the Response object has an ok property, you added one to the Mock.

If you wish to use a third-party API to increase the utility of your application, you must be certain that the two systems will work well together: verify that the two programs interact predictably, and make sure your tests run in a controlled environment.

Because the Response object has a json() function, I added json to the Mock and gave it a return_value, since it will be called like a function. The json() function returns a list of todo objects. The test now includes an assertion that verifies the value of response.json(): make sure that the get_todos() function returns the same list of todos as the server does. Finally, I include a failure test to complete the tests for get_todos().
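The tests described above are missing from the article; a self-contained sketch of both the success and failure cases (service function inlined, sample todo data assumed):

```python
from unittest.mock import Mock, patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


@patch('requests.get')
def test_getting_todos_when_response_is_ok(mock_get):
    todos = [{'userId': 1, 'id': 1, 'title': 'Make the bed', 'completed': False}]
    # Configure the mock to act like a successful response whose json()
    # method yields the fake todo list.
    mock_get.return_value = Mock(ok=True)
    mock_get.return_value.json.return_value = todos

    response = get_todos()
    assert response.json() == todos


@patch('requests.get')
def test_getting_todos_when_response_is_not_ok(mock_get):
    # Configure the mock to act like a failed request.
    mock_get.return_value = Mock(ok=False)

    response = get_todos()
    assert response is None
```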

Run the tests and observe how they pass.


$ nosetests --verbosity=2 project
test_todos.test_getting_todos_when_response_is_not_ok ... ok
test_todos.test_getting_todos_when_response_is_ok ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.785s

OK

Mocking integrated functions

So far, the examples I've given you have been straightforward, and the next one is too. Consider the following scenario: you write a new service function that calls get_todos() and then filters the results to return only the uncompleted todo items. Do you need to mock requests.get() again? No; in this case, you mock the get_todos() function directly! You only need to be concerned with how your code interacts with the mock. You already know that get_todos() takes no parameters and returns a response with a json() function that returns a list of todo objects. You don't care what happens under the hood; all that matters is that the get_todos() mock returns what you expect the real function to return.


I've modified the test function to find get_todos() and replace it with a mock. The mock should return an object with a json() function; when called, json() should produce a list of todo objects. I also include an assertion that the get_todos() mock was called, which is useful for ensuring that the real get_todos() function is invoked when the new service function runs. Finally, I include a test to ensure that get_uncompleted_todos() returns an empty list if get_todos() returns None; there too, I confirm that get_todos() was called.

Write the tests, run them to see if they fail, and write the code to pass them.
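Neither the new service function nor its tests appear in the article. A self-contained sketch of both (when everything lives in one module, the patch target is the current module; in a project it would be something like 'project.services.get_todos'):

```python
from unittest.mock import Mock, patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


def get_uncompleted_todos():
    # Build on get_todos() and filter out the completed items.
    response = get_todos()
    if response is None:
        return []
    return [todo for todo in response.json() if todo.get('completed') is False]


@patch(__name__ + '.get_todos')  # patch get_todos itself, not requests.get
def test_getting_uncompleted_todos_when_todos_is_not_none(mock_get_todos):
    todo = {'userId': 1, 'id': 1, 'title': 'Make the bed', 'completed': False}
    done = {'userId': 1, 'id': 2, 'title': 'Walk the dog', 'completed': True}
    mock_get_todos.return_value = Mock()
    mock_get_todos.return_value.json.return_value = [todo, done]

    uncompleted_todos = get_uncompleted_todos()
    assert mock_get_todos.called          # the service really used get_todos()
    assert uncompleted_todos == [todo]    # only the uncompleted item survives


@patch(__name__ + '.get_todos')
def test_getting_uncompleted_todos_when_todos_is_none(mock_get_todos):
    mock_get_todos.return_value = None

    assert get_uncompleted_todos() == []
    assert mock_get_todos.called
```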

Refactoring tests to use classes

You've undoubtedly noticed that some tests appear to form a group: two of the tests exercise the get_todos() function, and the other two concern get_uncompleted_todos(). Refactoring them into classes satisfies the following objectives:

  1. Moving common test functions into a class makes it easier to test them together. Although you can instruct nose to target a collection of functions, targeting a single class is simpler.
  2. Tests that belong together often share the same steps for creating and destroying the data they use. These steps can live in the setup_class() and teardown_class() methods.
  3. You can create utility methods on the class to reuse logic that is repeated across test functions.

Notice that I use the patcher technique to mock the targeted functions in the test classes. As I mentioned, this patching method is great for keeping a mock in place across several tests. The code in the teardown_class() method explicitly restores the original function when the tests finish.
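The class-based test code is missing from the article. A self-contained sketch of one of the two classes (TestTodos; TestUncompletedTodos would follow the same pattern with get_todos() patched instead of requests.get):

```python
from unittest.mock import Mock, patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


class TestTodos(object):
    @classmethod
    def setup_class(cls):
        # Start patching once for the whole class...
        cls.mock_get_patcher = patch('requests.get')
        cls.mock_get = cls.mock_get_patcher.start()

    @classmethod
    def teardown_class(cls):
        # ...and explicitly restore the real requests.get afterwards.
        cls.mock_get_patcher.stop()

    def test_getting_todos_when_response_is_ok(self):
        todos = [{'userId': 1, 'id': 1, 'title': 'Make the bed', 'completed': False}]
        self.mock_get.return_value = Mock(ok=True)
        self.mock_get.return_value.json.return_value = todos

        assert get_todos().json() == todos

    def test_getting_todos_when_response_is_not_ok(self):
        self.mock_get.return_value = Mock(ok=False)

        assert get_todos() is None
```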


Run the tests.


$ nosetests --verbosity=2 project
test_todos.TestTodos.test_getting_todos_when_response_is_not_ok ... ok
test_todos.TestTodos.test_getting_todos_when_response_is_ok ... ok
test_todos.TestUncompletedTodos.test_getting_uncompleted_todos_when_todos_is_none ... ok
test_todos.TestUncompletedTodos.test_getting_uncompleted_todos_when_todos_is_not_none ... ok
----------------------------------------------------------------------
Ran 4 tests in 0.400s

OK

Testing for updates to the API's data

Throughout this article, I've been showing you how to mock data returned by a third-party API. The mock data was created on the supposition that the real data follows the same data contract as the fake data. Your initial step was to call the API and note the structure of the returned data. While you can reasonably be certain that the structure has not changed in the short period you have been working through these examples, you shouldn't be certain it will stay the same indefinitely. Any reliable external library is consistently updated, and although developers aim to make new code backward compatible, code eventually becomes deprecated.

As you can imagine, relying solely on fake data is risky. Because you are testing your code without speaking to the real server, you risk becoming overconfident in the strength of your tests, and everything breaks down when you run your program against actual data. To ensure that the data from the server matches the data you are testing against, use the following technique: the objective is to compare the data structure, not the data itself.

Take note of the context-manager patching strategy I'm employing here. In this case, you must call the real server and a mocked version of it separately.

Filename: project/tests/
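The contract test's contents are missing (and its path is truncated above). A self-contained sketch: call the real API once, call the mocked version once, and compare the keys rather than the values (service function and sample todo data are assumptions):

```python
from unittest.mock import Mock, patch

import requests

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


def get_todos():
    response = requests.get(TODOS_URL)
    if response.ok:
        return response
    return None


def test_integration_contract():
    # Call the service to hit the actual API server.
    actual = get_todos().json()

    # Call the service again, this time patched with the mock data.
    with patch('requests.get') as mock_get:
        mock_get.return_value = Mock(ok=True)
        mock_get.return_value.json.return_value = [
            {'userId': 1, 'id': 1, 'title': 'Make the bed', 'completed': False}
        ]
        mocked = get_todos().json()

    # Compare the data structure (the keys), not the data itself.
    assert set(mocked[0].keys()) == set(actual[0].keys())
```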

Conditionally testing scenarios

You must know when to run the test you created to compare the real data contract with the mocked one. The server test shouldn't run with your automated suite, because a failure doesn't necessarily indicate your code is flawed: for various reasons beyond your control, you might not be able to connect to the real server when your test suite is running. Execute this test independently from your test automation, but do so periodically. One method for selectively skipping tests is to use an environment variable as a toggle. In the scenario below, all tests run unless the SKIP_REAL variable is set to True.

When the SKIP_REAL variable is set to True, any test decorated with @skipIf(SKIP_REAL) will be skipped.

Filename: project/tests/

Filename: project/
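Both files' contents are missing (and both paths are truncated above). A self-contained sketch of the toggle and a decorated test, assuming the skip marker comes from unittest (nose honors unittest's SkipTest):

```python
import os
from unittest import skipIf

import requests

# Hypothetical constants: the toggle reads an environment variable so it
# can be flipped without touching code.
SKIP_REAL = os.getenv('SKIP_REAL', 'False') == 'True'

TODOS_URL = 'https://jsonplaceholder.typicode.com/todos'


@skipIf(SKIP_REAL, 'Skipping tests that hit the real API server.')
def test_real_server_is_up():
    # Only runs when SKIP_REAL is not set to True.
    response = requests.get(TODOS_URL)
    assert response.ok
```

Run the suite normally with `SKIP_REAL=True` in the environment, and unset it when you periodically want the real-server tests to execute.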


You have now learned to use mocks to test your app's interaction with a third-party API. Now that you know how to approach the problem, you can continue honing your skills by writing service functions for the remaining JSONPlaceholder API endpoints.
