Posts by Michael Wojcik
12336 publicly visible posts • joined 21 Dec 2007
FBI: Give us warrantless Section 702 snooping powers – or China wins
Leaked memo: Microsoft employees should be using Copilot too
Honestly, regardless of how "good" Copilot was, I wouldn't use it. I'm not keen to jump on the learned-helplessness bandwagon.
None of the arguments I've seen promoting the use of LLMs for programming assistance, including from people like Matt Welsh, have been at all persuasive. I believe they're wildly mistaken about the actual, long-term costs and benefits.
Europe's deepest mine to become Europe's deepest battery
Re: Pointless waste of money, energy and effort
For pumped-water gravitational-potential storage it would have the advantage of a safer failure mode than "store it on top of a hill" — water flooding the mine is nicer than water flooding the surrounding countryside. But the disadvantage of higher installation and maintenance costs (plumbing up and down a hill is easier than doing it up and down a mineshaft) on what is already a marginal storage solution probably makes it economically infeasible.
Re: BAH
Wasn't there a Top Gear segment where James tried to charge a car using an office building's revolving doors?
Munroe's What If? 2 has a bit on whether a cyclist could power a toaster. You wouldn't want to try.
A cyclist could power an EZ Bake Oven, though. Those used a 100W light bulb, and that's feasible for a human. Of course it's a ridiculously inefficient system, though if you need the exercise anyway then it's not really wasted, I suppose. "Yes, I did 10 miles on the stationary bike, and baked a little cake for afterward."
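The arithmetic is easy to check. A sketch, with rough figures I'm assuming (sustained amateur output around 100W, a toaster around 1000W, generator losses around 20%), none of them authoritative:

```python
# Back-of-the-envelope check: can a cyclist power these appliances?
# All figures below are rough assumptions, not measured values.

SUSTAINED_CYCLIST_W = 100   # assumed: what a fit amateur can hold for a while
TOASTER_W = 1000            # assumed: typical two-slot toaster draw
EZ_BAKE_BULB_W = 100        # the incandescent bulb in a classic EZ Bake Oven

def can_power(load_w, rider_w=SUSTAINED_CYCLIST_W, efficiency=0.8):
    """True if the rider's output, after generator losses, covers the load."""
    return rider_w * efficiency >= load_w

print(can_power(TOASTER_W))                     # toaster: no chance
print(can_power(EZ_BAKE_BULB_W, rider_w=130))   # oven bulb: yes, just barely
```

Even a strong rider misses the toaster by an order of magnitude of effort-per-slice, which is Munroe's point.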
IIRC, there was an issue of Bob Burden's Flaming Carrot comic in which a mad scientist was "harvesting cellulite from socialites" to convert into a new high explosive. I don't recall if the Mystery Men (who later received a film adaptation, though it wasn't very faithful to the source material) were featured.
Fintech engineer grounded by crypto fraud caper, including $300m spoof trades
Re: Cannibals accuses Hannibal Lecter of engaging in cannibalism.
Sure, but the fun thing here is how fraud #1, fintech apps,1 was superseded by fraud #2, introducing a shitcoin (HYDRO), and that in turn by fraud #3, manipulating the price of said shitcoin.
You'd think at some point they'd have realized that for the same effort they probably could have done something legitimate.
1Fraud because said "apps" are rarely useful and almost always dreadfully insecure. Even by the meager standards of the software industry, it's a shady business to be in.
Re: Ponzi
Finally, an actual Ponzi
No it wasn't. There are at least two critical differences here.
First, Ponzi was very likely unaware that he was committing fraud. He was largely innumerate and had no bookkeeping to speak of; he had cash stored in boxes and piled on tables, with apparently no records of transactions. He simply paid exiting parties out of the big pile of cash. His "assistants" probably understood what was really happening, but Ponzi himself did not. See Bulgatz's book.
Second, the distinguishing feature of a Ponzi scheme is that exiting investors are paid from incoming receipts from other investors rather than actual investment returns. That's not at all what was happening here, where the price of a vehicle was being inflated by manufactured speculation. The conspirators profited by selling the inflated security to other investors, but those sales weren't the primary means of propping up the price and generating new interest, which is what characterizes a Ponzi scheme.
Wash sales and other manufactured volume is just plain old securities-trading fraud.
IT suppliers hacked off with Uncle Sam's demands in aftermath of cyberattacks
Well, for one thing, they do.
But for those things they buy (or lease, or contract out, or whatever), why should they? They have money. They offer vendors money in exchange for goods and services. They're free to slap burdensome contract terms on; vendors can take those or leave them. So far, the money has been an effective inducement. I don't see why the Feds have any reason to change how they do things — it's working for them.
If you're in charge of approving a sale, and you don't like their terms, then fine, turn the business down. Someone else will take it.
Goodhart's Law does apply, of course. People will try to game the system.
But that said, discovery may itself be discoverable. At some point an organization will discover the incident, and covering that sort of thing up for all time is risky. It gets progressively riskier as more incidents occur. I think many organizations will realize that the lowest cost will be to comply as best they can, should these go into force.
And, frankly, if your CI/CD mechanism can push a change to production, it can push a change to the SBOM. They're machine-readable, after all — typically something like CycloneDX. Updating the SBOM should take much less time than testing an update to an external component. I have zero sympathy for this particular complaint.
(Yes, one dependency may have a whole host of transitive dependencies, necessitating a fairly large and complex SBOM update. But that also means it needs fairly extensive testing. And perhaps using components that bring in hundreds of transitive dependencies wasn't such a great idea in the first place. Sympathy not incremented.)
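To make the "it's just machine-readable data" point concrete: bumping a component in a CycloneDX-style SBOM is a trivial structured-data edit. A toy sketch in Python (the component names and the minimal field set here are illustrative, not from any real pipeline):

```python
import json

# Minimal CycloneDX-style SBOM fragment. A real SBOM from a build
# pipeline would also carry purls, hashes, licences, and so on.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "left-pad", "version": "1.3.0"},
        {"type": "library", "name": "openssl", "version": "3.0.12"},
    ],
}

def bump_component(sbom, name, new_version):
    """Update one component's version in place: the sort of step a
    CI/CD job could run right after testing the dependency update."""
    for comp in sbom["components"]:
        if comp["name"] == name:
            comp["version"] = new_version
            return True
    return False

bump_component(sbom, "openssl", "3.0.13")
print(json.dumps(sbom["components"][1]))
```

Transitive dependencies just mean more entries in that list; the edit itself stays mechanical.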
Re: So they learnt nothing..
Well, yeah. Arguably the proposed US regulation isn't quite as bad as India's 2022 fumble:
* It applies only to contracts with the Feds.
* The reporting requirement is 8 hours, rather than India's 6 hours. That's a 33% improvement!
* The language from CISA is arguably a bit more specific. According to the Reg, India's requirement was extremely vague and broad ("Unauthorized access of IT systems/data"), while CISA's at least qualifies it a bit more:
Any event or series of events, which pose(s) actual or imminent jeopardy, without lawful authority, to the integrity, confidentiality, or availability of information or an information system; or constitutes a violation or imminent threat of violation of law, security policies, security procedures, or acceptable use policies
Plus some additional more-specific cases around malware and data labeling. See here.
India (via CERT-In) also apparently sprang the rules on businesses with a 60-day deadline for compliance, while the US had a longer comment period and still hasn't made these regulations in force. And I'm hoping that unlike CERT-In, CISA isn't going to allow faxed reports.
But I agree. This approach backfired on India, and as the article says, having various Federal agencies attempting to impose different requirements is a mess.
Republican senators try to outlaw rules that restrict Wall Street’s use of AI
That's what optimization tends to do. Current ML models are broadly speaking not great at optimization.
Anyway, what you're describing is closer to HFT, and you would very very much not use an LLM in HFT. If you're worried about people front-running normal market activity, look toward Musk and SpaceX, because that satellite constellation now can provide a faster link between London and New York — thanks to the sat-to-sat laser mesh, even with the trip up to and down from LEO, it can beat terrestrial fiber. That's a guaranteed money-maker, unlike letting LLMs make your SWAG trades.
Re: Let 'em do it.
Eh?
Knight Capital was a market maker, which is a perfectly respectable and rather mundane sort of financial firm. Their main business was bundling trades to reduce transaction costs.
Knight certainly made mistakes in 2012 — there's a good case study here which goes into details. But none of those mistakes are rare in the software industry (and probably not rare in the financial industry), and none of them are in any way related to using so-called "AI" or other novel approaches to make speculative trading decisions.
LTCM, though also in no way related to "AI", would probably be a more apt example of the sort of errors that happen in the rush for a few more crumbs.
Re: Wallstreet
For many folks, financial markets are what they participate in solely through mutual funds backing their retirement accounts. If those folks diversify and then don't fuck around with their investment allocations or capital, that's still the best average return on investment they're going to see.
Are better systems imaginable? Sure. But today, for the middle class, "Wall Street" is what will support them in their dotage.
Re: So how long until
Optionally you can throw in a prompt for it to fit to.
It's perhaps more accurate to say the initial prompt selects a point in a high-dimensional space, and autoregressive decoding proceeds from there, with some random jitter from temperature sampling to vary the output. (Gradient descent is what happens during training, not generation.) Transformer models aren't recurrent; they just produce output tokens until they hit a "stop" token. So it's not really fitting the output to the prompt in the more usual interpolative sense.
Subsequent prompts take the session so far (or as much of it as will fit in the model's context window) and repeat the process from the beginning.
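The loop is simpler than it sounds. A toy sketch (the "model" here is a hard-coded stand-in, not a real transformer, and the window size is invented):

```python
import random

STOP = "<stop>"
CONTEXT_WINDOW = 8  # tokens the toy "model" can see; real windows are far larger

def toy_model(context):
    """Stand-in for a transformer: returns (token, weight) pairs.
    A real model would compute these from attention over `context`."""
    if len(context) >= 5:
        return [(STOP, 0.9), ("la", 0.1)]
    return [("la", 0.7), ("di", 0.2), (STOP, 0.1)]

def generate(prompt, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt)
    while True:
        # Each step re-reads only the trailing CONTEXT_WINDOW tokens,
        # mirroring how long sessions get truncated to fit the window.
        visible = tokens[-CONTEXT_WINDOW:]
        choices = toy_model(visible)
        token = rng.choices([t for t, _ in choices],
                            weights=[w for _, w in choices])[0]
        if token == STOP:
            return tokens
        tokens.append(token)

print(generate(["hum"]))
```

The weighted draw is the "jitter": at temperature zero you'd always take the top choice instead.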
Transformers aren't very good at predicting time-series data in general, according to various studies, though some researchers have gotten better results than others. They sometimes do a decent job at predicting swings in sentiment (in a statistically-general way, though also they can often do so, rather creepily, for an individual1), and that could be useful for forecasting the whims of the market, though I personally wouldn't want to try that without an explicable model. But my risk tolerance is low.
Large transformer models can spontaneously derive world models; this has been shown, for small worlds,2 in a formally strict fashion using linear probing and other techniques. For example, a model trained only on continuations of descriptions of chess moves was shown to have an internal model of the chess board, the rules governing various pieces, the concept of checkmate, etc.
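For anyone curious what "linear probing" means in those papers: you fit a plain linear readout from the model's hidden activations to a known world property, and if the readout predicts it well, the representation encodes that property. A synthetic sketch (the "activations" and the embedded feature are invented for the demo, not from any actual model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for transformer hidden states: 200 positions,
# 16-dimensional activations, with a world feature (say, "is this
# board square occupied") secretly embedded along one direction.
n, d = 200, 16
feature = rng.integers(0, 2, size=n)                # the hidden world property
direction = rng.normal(size=d)
activations = rng.normal(size=(n, d)) + np.outer(feature, direction)

# The linear probe: a least-squares readout from activations to feature.
w, *_ = np.linalg.lstsq(activations, feature.astype(float), rcond=None)
predictions = (activations @ w) > 0.5
accuracy = (predictions == feature).mean()
print(f"probe accuracy: {accuracy:.2f}")
# High accuracy means the feature is linearly decodable from the states.
```

The real studies do this against activations from an actual trained model and a ground-truth board state, with held-out data; the mechanics are the same.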
However, the world that has to be modeled to predict financial markets is more complex than those.3 Empirical evidence shows humans, broadly speaking, suck at modeling that world, despite the fact that each of us has a lot more connectivity than the biggest transformer models currently deployed. So I don't think just training a transformer on a bunch of market data and asking it to produce tokens would work very well. But sentiment prediction ... yeah, maybe. Similarly for, say, predicting the market's reaction to financial news, though you'd want it to be quite fast there in order to get any sort of an edge.
1As seen in the numerous reports by people who have taught a model to emulate a deceased partner, for example.
2For example, for the games of Othello and chess.
3Proof: The actual world is the union of the set comprising the games of Othello and chess, and of the set of things in the world which are neither of those; both sets are non-empty (lemma: the former contains two things, and the latter at least one, namely me) and they are not identical (lemma: I am neither Othello nor chess).
When it comes to working from home, Register readers are bucking national trends
I am frankly rather dubious that there would be a large change to home heating or cooling without WFH, at least in the US where energy costs as a share of income are not typically high enough to encourage people to use significant setbacks for the hours the house is vacant.
When my wife and I were living in the Stately Manor in Michigan, I had a programmable thermostat and did set the heat back during the day when she was at work (we didn't have air conditioning, so nothing to set back in the warmer months). But I did work from home, so in my case WFH didn't affect that practice.
Here at Mountain Fastness 2.0 I don't bother, because we use very little energy for heating, thanks to a lot of thermal mass, a lot of insulation, and significant passive solar gain. (Again, we don't have A/C; it's completely unnecessary here if your house has a halfway decent design, since air temperature drops dramatically as soon as the sun's below the horizon.) Also we have a wood stove, though I've only used it a couple of times because usually it's too warm in the house to want it.
I've known very few other people with programmable thermostats, even though they're readily available and easy to install.
Obviously in theory there are advantages to heating and cooling the same volume in fewer buildings, thanks to the ratio between surface area and volume, and other effects such as greater thermal mass and scaling efficiencies in the equipment, assuming similar levels of insulation and other factors. (Which are some pretty big assumptions.) But my suspicion, without having evidence to hand, is that while you might see a big effect with arcology-style mixed-use buildings (or even something like Whittier, Alaska), in practice WFH doesn't make a huge difference.
One or two days in the office, maybe, enough to build that social rapport. Fully wfh is quite isolating, I find.
I think this depends very strongly on the individual employee, their immediate coworkers, and the organization.
I've worked from home for a quarter of a century now. Prior to that I was in the office every day for a few years, then at a remote office by myself for a few years, then at office every day for a few years.
For me, both environments were pleasant and productive. I'm introverted by inclination but generally get along with people I know (and I've learned to interact with strangers, even if I find it requires more effort). In my experience — which is certainly not universal, and may well not be common — the office was not distracting or oppressive, and while I won't claim to have enjoyed commuting, it was bearable, particularly when I used public transport and could read while traveling. Conversely, I've never found working away from the office left me isolated or "out of the loop".
My employer used to fly me out once or twice a year to meet in person with my teammates, which was nice, as it was something of a working vacation and a chance to socialize with them. That ended quite a few years ago as part of cost-cutting measures, and I do miss it, though now the logistics would be horrendous. (I've gone from a 20-minute drive to a small regional airport to a 2 1/2-hour drive to a large one, and my connecting flight has similarly gone from around half an hour in the air to more than two hours.)
But I talk with my teammates every day, and there's plenty of joking and socializing as well as serious work discussion.
Different approaches will work for different people. I read a piece in CACM during the pandemic about a team that basically stays on a video call for the entire working day. Most of the time they're all heads-down doing their own thing, but they can see and talk to each other spontaneously. Apparently that suited them. I'd find it awful.
Re: It's the commute
Shrug. I'd have Internet service (which is a flat rate here) regardless of whether I used it for work, and the additional electricity consumption is negligible.
IIRC, a couple of decades ago, Micro Focus would let you expense home Internet if you did significant work from home, too. These days that doesn't really make sense.
Re: It's the commute
Isn't it odd how my house has 2 offices
No, 2 is even.
Seriously: Mountain Fastness 2.0, which my wife designed, also has two offices. They're only 8'x8' (~2.4m × 2.4m), but the ceilings are more than 9' (~2.7m) so they seem larger, and that's enough room for a desk, large bookcase, filing cabinet, and some other furnishings. My wife's has a nice large window, and mine has a venting skylight. We both generally work from home, and when we were forced to share space in MtF 1.0 (which is really just a casita) it made things like simultaneous calls difficult.
I use my office for non-work purposes too, so I don't feel the need to pretend my employer ought to be paying me to use it, and I don't claim it as an expense on my income tax return.
Twitter spinout Bluesky ends invite-only phase and opens its doors to all comers
Mozilla adds paid-for data-deletion tier to Monitor, its privacy-breach radar
Re: The Need for Fair Compensation in Data Broker Practices
How would such a law distinguish between these scurrilous "data brokers" and, oh, credit-reporting agencies? Yes, the latter are fairly horrible too, but they serve a critical function in consumer credit, and a significant blow to consumer credit here in the US (for example) would precipitate economic meltdown and huge damage to consumers and businesses alike.
My county has property records online. Would such a law forbid those? How does it make the distinction? Would the law conflict with the public registry of voters that's also required by law? Would it conflict with "sunshine laws", for people in local government? Would it conflict with online directories? Would Google be required to scrub all PII that might end up on it? Would the Internet Archive?
People love to toss out proposals to outlaw this and that. Even when they're ethically justified and represent an arguably appropriate action by the state, it's generally far more difficult to craft a law that's appropriately specific and effective.
Outrage makes for bad legislation.
It's difficult to get legislatures or courts to agree on laws that pierce the corporate veil, both because they enjoy the spoils of capitalism, and because that would run counter to the liberal (in the technical sense) ideology which has dominated European-derived politics throughout the Early Modern and Modern eras, with the exception of the temporary successes in some areas of Communism and Fascism.
The concept of institutional legal personhood is older than the stock corporation, and well entrenched.
That doesn't mean the veil is never pierced, of course; but broadly speaking it happens in specific cases, and is not applied against a general class, except where the state can make an argument of egregious violation — for organized crime, for example. I don't see any realistic hope of it being leveled against data brokers as a class, even if their business were made illegal. (And that's not a trivial proposition, at least because of difficulties in defining it.)
George Orwell had a pretty good premonition of what it'll look like.
Parts will, sure. Other parts will look like Huxley's Brave New World, which is a far more economically effective way of controlling the populace. Just as capitalism is more economically efficient than slavery,1 subornment is cheaper and more effective than repression. It's typically clumsy, precarious totalitarians who institute repressive surveillance regimes; consumer-capitalist leaders (who are generally not the notional political leaders) get the people to enjoy their subjugation and participate willingly.
On the plus side, if you're in a rich country, you have a good shot at a higher quality of life because of this distinction. That's part of the trap, obviously.
1Perhaps most famously argued by Eric Williams, though CLR James claimed to have given him the thesis when they were at university. Williams, of course, later placed James under house arrest, so perhaps the accusation had some sting.
DEF CON is canceled! No, really this time – but the show will go on
Apple Vision Pro is creating a new generation of glassholes
Survey: Over half of undergrads in UK are using AI in university assignments
Re: Plus ça change, plus c'est la même chose
Right now, it isn't.
Indeed. As someone with degrees in Computer Science, English, and Rhetoric, and who's worked with various ML and NLP algorithms and implementations, I heartily endorse this evaluation. I've yet to see an example of LLM-produced prose which rises above the pedestrian. (And as for verse — yikes. It burns.)
And it really doesn't matter whether "AI" in some form will become capable of producing actually competent prose.1 The point of learning writing, at the gen-ed level,2 isn't to make students professional writers. Even making them competent college writers is a secondary goal, because frankly that's not as important, and the ones who want to be competent college writers can get there on their own. (It's not a high bar.) The point is to show them something about how written communication works and functions in society. It's to give them some capability in rhetorical critique. It's to help them become less of a mark for every demagogue and con artist that comes along.
Using a computer to do their writing for them will not achieve that. Or, really, anything other than generating waste heat.
1I've given that matter quite a bit of thought, going back to some years before I wrote my MA thesis on computational rhetoric, and I think it's perfectly achievable. I'm not convinced further scaling and refinement of deep transformer stacks is going to do it, though. I'd use heterogeneous models competing for "attention" doled out by an evaluation model as a first step, with the evaluation model being recurrent, and some of the contributors dealing with things like perceived chronology (which also requires recurrence, unless it has a huge amount of context; see various papers on emulating time series with transformers) and physical aspects of real-world interactions. Wolfram thinks we might get there through adding capabilities in computational language and semantic grammars.
2I have taught gen-ed college writing ("First-Year Composition", in US academia-speak), and as preparation for that had to read a decent body of composition theory and research. I've spent a lot of time with writing teachers and in writing departments. This isn't just a pulled-from-my-ass opinion.
Re: Plus ça change, plus c'est la même chose
Yes. Even if you don't do full-on Fermi estimation in your head (which really isn't hard), just counting up orders of magnitude is a great way to sanity-check basic arithmetic. "Wait, shouldn't that answer have four digits?"
Works nicely for binary and hexadecimal too, once you have a bit of experience.
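The trick reduces to counting digits: a product's digit count is the sum of the factors' digit counts, or one less, in any base. A quick sketch:

```python
def digits(n, base=10):
    """Number of digits of a positive integer in the given base."""
    count = 0
    while n:
        n //= base
        count += 1
    return count

# "Shouldn't that answer have four digits?" -- 2 digits x 2 digits
# gives 3 or 4 digits, never more, never fewer.
a, b = 47, 86
assert digits(a * b) in (digits(a) + digits(b) - 1, digits(a) + digits(b))

# Same rule in hexadecimal: two hex digits times two hex digits.
assert digits(0x1F * 0x2C, base=16) in (3, 4)
print(digits(47 * 86), hex(0x1F * 0x2C))
```

Doing that count in your head is the whole sanity check; the code just shows the rule holds.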
Re: Plus ça change, plus c'est la même chose
Fun. My initial guess would be a bad electrical connection somewhere, or other power-related failure like a bad capacitor; but of course even with something as simple as a 4-function calculator ALU you can get the occasional bad chip in the yield.
During some of my teen years I worked at an ice cream and sandwich place that still used a mechanical, total-only cash register at the take-out counter. (Fancier models were available; the manager just didn't see any reason to upgrade that one.) We had to sum, add tax, and count out change in our heads. It was a good exercise.
Zvi recently quoted a tweet by Ethan Mollick (haven't tried to confirm this source, since I refuse to use Twitter) stating that in an informal survey of ~250 undergraduates and grad students in his class, nearly all confirmed using "AI", and that "Many used it as a tutor. The vast majority used AI on assignments at least once".
Nearly 100% strikes me as a far more plausible statistic. Of course, that's "over half".
It's just too damned tempting for all but a relative handful of contrarians.
Re: An easy solution
Then, it's just a tool they have used to help them.
I'd call that a dangerous error.
Krakauer distinguishes between "complementary" cognitive technologies and "competitive" ones. LLMs, even when used as support tools, are primarily or exclusively competitive. They absolve the user of thinking. When an LLM is used for research,1 the student often gets at best a shallow answer couched in undeservedly persuasive language. Meanwhile, the student misses out on the intellectual exercise of using research techniques to find sources; of comparing multiple sources to update (confirm/challenge/refute) evaluation of claims; of assessing the quality of sources; of serendipitous discovery of tangential but interesting or useful information; of considering original arguments.
LLMs are dangerous. They inculcate intellectual laziness and learned helplessness. They provide a terribly narrow view of the world.
And, of course, schoolwork is supposed to be work. It's paideia. The whole point is to exercise the mind. Often that's tiresome, and often students don't see the value of it.2 Students often won't see the value, because they're students. If they already knew everything about the subject of study, they wouldn't be students. And students often willfully ignore the value, because, well, exercise is often boring. Too fucking bad. It beats digging coal.
The article quotes someone saying "My primary concern is the significant number of students who are unaware of the potential for 'hallucinations' and inaccuracies in AI". That is not at all my concern. I say the more hallucinations the better; let people learn that this is a bad tool, even if for the wrong reasons.
1I'm not even considering the use of an LLM to generate actual text of a student's submission, which would very likely constitute plagiarism in the universities I've attended or worked at. I'd consider this true of Grammarly (a software product I loathe) as well, when used to "clean up" or "improve" a student's prose. (Grammarly is now very much on the "AI" bandwagon, so it's a faint distinction at best.)
2Zvi, in one of his AI roundups, wrote something to the effect that he doesn't mind if students bypass work that they don't think has value. That's an incredibly blinkered view. (Of course, it's largely inflected by Zvi's own experience of school, as a talented and self-motivated learner bored by the inevitable leveling effects of being in a classroom of mixed ability. I'm not an opponent of tracking, either.)
Re: An easy solution
Because you go to university to learn to think for yourself.
Some people certainly may, but that is not, to a first approximation, what the institution was established for, or how it sees itself.
The histories of the various types of higher education in the European and US traditions are complicated, to put it mildly, and reflect a variety of philosophical positions and sociopolitical programs. But prominent among their goals at various times and places were things like continuity of knowledge and creation of productive citizens. "Thinking for yourself" as a good-in-itself is a relatively recent concept — it is specifically modern, that is it reflects a mindset that the world is in flux and requires new ideas, as well as a commitment to individualism which is certainly not universal in European and European-derived cultures across even the Modern period.
I'm less familiar with scholarship on the intellectual history of universities (or equivalents) in other cultures, but from my experience of and reading about, say, Japanese universities, I can't say "think for yourself" was a prominent motto there.
Re: An easy solution
This is idiotic, to be frank. There's no reliable way of "catching" someone using an LLM, the alleged detectors from snake-oil firms like Turnitin have abysmal false-positive rates, the consequences of a false conviction are far too high, and many students would tie up considerable resources in appeals. And a combative relationship between students and faculty does not encourage learning.
Google flushes cached search results forever
Honestly, I've used it searching for my own older posts to the Reg. Searches for those generally turn up URLs with relative page numbers, i.e. page N of my posts, or of the first Forum page of a particular article. But those numbers change as I submit enough new posts to push them on to page N+1 and eventually further. The cached result was far more reliable.
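The drift is simple arithmetic: on a newest-first listing, a post's page is determined by how many newer posts sit ahead of it, so every new post can shove old ones across a page boundary. A sketch (the page size is assumed, not the Reg's actual value):

```python
POSTS_PER_PAGE = 50  # assumed page size; the site's actual value may differ

def page_of(newer_count, per_page=POSTS_PER_PAGE):
    """Page holding a post on a newest-first listing, where
    newer_count = number of posts newer than it (0 = newest)."""
    return newer_count // per_page + 1

rank = 49                 # 49 newer posts: last slot of page 1
print(page_of(rank))      # page 1 today...
print(page_of(rank + 1))  # ...page 2 after one more post is submitted
```

Which is exactly why a URL built on a relative page number rots, while a cached copy of the page doesn't.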