
a bytes-like object is required, not 'str' #298

Open
sarim-sikander-turing opened this issue May 1, 2022 · 11 comments

@sarim-sikander-turing

sarim-sikander-turing commented May 1, 2022

There is an issue in analytics.py.

Traceback (most recent call last):
  File ".\api.py", line 60, in <module>
    async_data.append(LineItem.async_stats_job_data(account, url=result.url))
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\analytics.py", line 115, in async_stats_job_data
    response = Request(account.client, 'get', resource.path, domain=domain,
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\http.py", line 70, in perform
    raise Error.from_response(response)
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\error.py", line 45, in from_response
    return ERRORS[response.code](response)
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\error.py", line 13, in __init__
    if response.body and 'errors' in response.body:
TypeError: a bytes-like object is required, not 'str'

This error occurs when running analytics.py. Please fix this issue.
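For what it's worth, the last frame of the traceback is a plain Python 3 type mismatch: error.py evaluates `'errors' in response.body` while `response.body` is bytes, and a str needle cannot be searched in a bytes haystack. A minimal repro of just that mismatch (the body content here is made up for illustration):

```python
body = b'{"errors": [{"message": "..."}]}'  # raw bytes, as an HTTP layer might return

try:
    'errors' in body  # str needle against a bytes haystack, as in error.py line 13
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Decoding the body first avoids the mismatch
print('errors' in body.decode('utf-8'))  # True
```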

@pep4eto1211

Has anyone been able to find a solution for this? It appears to still be an issue with the latest version.

@rafabandoni

Still getting the same error today.

@rafabandoni

So, apparently the 30-second sleep is too short. I got a proper URL back after increasing the sleep time to 300 seconds. I don't know how this will impact cloud usage yet, so I'll run a few more tests and update this thread if I discover anything else.

@pep4eto1211

Is there a more reliable way to find out if a report is ready or not? The sleep method looks way too "hacky" for me.

@rafabandoni

> Is there a more reliable way to find out if a report is ready or not? The sleep method looks way too "hacky" for me.

Idk :(
These were the changes I made to get the proper data from the url after the sleep:

import gzip
import json

import requests

async_data = []
for result in async_stats_job_results:
    print(result)
    # Download and decompress the report directly instead of calling
    # LineItem.async_stats_job_data, which raises the TypeError.
    response = requests.get(result.url)
    data = json.loads(gzip.decompress(response.content).decode('utf-8'))
    async_data.append(data)

@pep4eto1211

Just posting an update here, as it seems none of the developers are actually looking at these issues.
The error we are seeing does not indicate the actual fault. Rather, it is the library failing to provide additional information about the real error: the code that is supposed to report what the error is fails itself, and this is its error. As such, any underlying failure might have caused this. As a workaround I found that you can print the response's HTTP status code here: https://github.com/twitterdev/twitter-python-ads-sdk/blob/a3dd5819341e77aa469d0b4b3399f0bcd028c80c/twitter_ads/http.py#L69 by reading the response.code property. This might at least point you in the right direction. Unfortunately I was unable to find a way to also print the response body, as it is a bytes object and I don't know what the encoding is.
In addition: my error was 403, which is extremely weird, considering I was able to paste the result's URL and download the generated report just fine. Download even works in incognito mode, making me think that the file download requires no authentication whatsoever (also seen in @rafabandoni's comment).
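The body can in fact be rendered for debugging without knowing the encoding up front. A minimal sketch (the gzip check and the utf-8 guess are assumptions about the payload, not confirmed library behavior):

```python
import gzip

def describe_response(code, body):
    """Render an HTTP status code and a raw response body for debugging."""
    if isinstance(body, bytes):
        # gzip streams start with the magic bytes 1f 8b
        if body[:2] == b'\x1f\x8b':
            body = gzip.decompress(body)
        # errors='replace' never raises, so this is safe even when
        # the real encoding turns out not to be utf-8
        body = body.decode('utf-8', errors='replace')
    return f'HTTP {code}: {body}'

print(describe_response(403, b'{"errors": [{"code": "UNAUTHORIZED"}]}'))
```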
Overall the code quality for this library is extremely low and the only reason I keep using it is because it handles the cumbersome auth.
@rafabandoni I actually found a way to check whether report generation is complete. Using the job ID, you can retrieve the job with the async_stats_job_result function. The returned object has a status property:

{
    "id": "*****",
    "status": "SUCCESS",
    "url": "*****",
    "created_at": "2022-08-10T10:39:25Z",
    "expires_at": "2022-08-12T10:39:38Z",
    "updated_at": "2022-08-10T10:39:38Z",
    "start_time": "2022-06-21T04:00:00Z",
    "end_time": "2022-06-22T04:00:00Z",
    "entity": "CAMPAIGN",
    "entity_ids": [
        "*****"
    ],
    "placement": "ALL_ON_TWITTER",
    "granularity": "DAY",
    "metric_groups": [
        "ENGAGEMENT",
        "BILLING"
    ]
}

A simple while loop with some waiting time and periodic status checks should be a better approach than waiting an arbitrary amount of time.
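A generic version of that loop might look like the sketch below. The `fetch_status` callable stands in for a call to async_stats_job_result that returns the job's status string; the names and intervals are illustrative, not part of the SDK:

```python
import time

def wait_until_success(fetch_status, poll_interval=1.0, timeout=60.0):
    """Poll fetch_status() until it returns 'SUCCESS' or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status() == 'SUCCESS':
            return True
        time.sleep(poll_interval)
    return False

# Simulated job that reports PROCESSING twice before succeeding.
states = iter(['PROCESSING', 'PROCESSING', 'SUCCESS'])
print(wait_until_success(lambda: next(states), poll_interval=0.01))  # True
```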

> These were the changes I made to get the proper data from the url after the sleep:

Thanks for this; I'll make use of it.

@tushdante
Collaborator

hey all - thank you for the thoughtful discussion. Given bandwidth constraints we haven't been able to look into this issue as soon as we'd have liked, and given that it seems to affect a lot of users I'd like to dedicate some time towards a fix.

@pep4eto1211 I'd love to hear your thoughts on what we can be doing better in terms of code quality and feature sets that are missing.

Generally speaking, when it comes to fetching analytics, we have a general algorithm available on our documentation page which outlines how to use the status field to determine when the files are ready.

@ttarom

ttarom commented Dec 29, 2022

Hey all, I got the same issue as well. Any updates on this?

@rafabandoni

rafabandoni commented Dec 29, 2022

> Hey all, I got the same issue as well. Any updates on this?

Nothing on my side; we finished the project using the workaround mentioned above, and I haven't worked with the Twitter API since.

@oleks-ufo

Still getting this error.

@brunopini

How is this still not fixed? Soon it'll be 2 years since the first issue.
