if chatGPT actually had a discernable style
Unidirectional-transformer LLMs tend to settle into one of many, many "style basins": regions of output space selected by the conditioning text in the context window (the parameters themselves are frozen at inference time). Remaining in that basin while generating text in that session helps verisimilitude, since human authors tend toward a degree of stylistic consistency (as you saw in your own work, and as has been observed basically everywhere since various cultures started developing their versions of rhetoric and literary analysis).
What the study described in the article shows is that populating the context window with a prompt for a particular type of scientific paper tends, with high probability, to land in a style basin that's detectably distinct from the style conventions that dominate in that genre. That's probably due to a combination of the imprecision of the model (ChatGPT has a lot of parameters, but obviously it's still very lossy; modern models have a lot more, so should have somewhat better precision) and the relatively small population of training texts from this specialized genre.
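As a toy illustration of what "detectably distinct" can mean in practice (my own sketch, not the study's actual method), stylometry often compares function-word frequency profiles between two bodies of text; the sample sentences below are invented placeholders:

```python
# Toy stylometry sketch: compare function-word frequency profiles,
# a classic signal for distinguishing authors or text sources.
# The two sample texts are invented, not drawn from the study.
from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "it", "for"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means identical profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

human = ("the results of the experiment show that the method is robust "
         "and that it was effective in the field")
model = ("it is important to note that the approach is significant "
         "and it is notable that it is robust")

d = cosine_distance(profile(human), profile(model))
print(f"stylistic distance: {d:.3f}")
```

A real study would of course use far richer features (n-grams, syntax, vocabulary distributions) and a classifier over many documents, but the underlying idea is the same: the generated text's profile sits measurably far from the genre's.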
If you ask ChatGPT to write, say, Harry Potter fanfic,[1] you'd probably get much better adherence to genre conventions.
[1] And I am not for a minute suggesting you do so, though I am reminded of Rowell's fine novel Fangirl now that I bring up the subject.