<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hacker News: gchallen</title><link>https://news.ycombinator.com/user?id=gchallen</link><description>Hacker News RSS</description><docs>https://hnrss.org/</docs><generator>hnrss v2.1.1</generator><lastBuildDate>Fri, 08 May 2026 13:45:18 +0000</lastBuildDate><atom:link href="https://hnrss.org/user?id=gchallen" rel="self" type="application/rss+xml"></atom:link><item><title><![CDATA[New comment by gchallen in "Canvas is down as ShinyHunters threatens to leak schools’ data"]]></title><description><![CDATA[
<p>They have not succeeded in forcing me, yet. But it's sad how many computing faculty apparently can't operate the basic online infrastructure needed to support their courses. Not that universities make it easy for us.<p>And of course the other serious concern I have with Canvas is that they are likely using all the materials faculty upload to train their AI replacements. Many of my colleagues engage in dark humor about this but I haven't noticed much action.</p>
]]></description><pubDate>Fri, 08 May 2026 02:46:44 +0000</pubDate><link>https://news.ycombinator.com/item?id=48057887</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=48057887</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=48057887</guid></item><item><title><![CDATA[New comment by gchallen in "Michael Rabin has died"]]></title><description><![CDATA[
<p>I took a course from him as a graduate student. I was not (and am still not) a theoretician. But I enjoyed the class and Professor Rabin's lectures.<p>A friend of mine was one of his graduate students and a teaching assistant for the class. He pointed out to me once that Professor Rabin would state many of his points during lecture twice. Once I started listening more carefully, I found this to be true. It was both subtle and pedagogically effective.<p>English was not his first language, but he enjoyed his struggles with it. I remember him stumbling over the pronunciation of a word during class. Giving up with a smile, he said, "This is a word I know only from books."</p>
]]></description><pubDate>Sat, 18 Apr 2026 21:51:51 +0000</pubDate><link>https://news.ycombinator.com/item?id=47819828</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=47819828</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47819828</guid></item><item><title><![CDATA[New comment by gchallen in "Ask HN: What is it like being in a CS major program these days?"]]></title><description><![CDATA[
<p>I teach computing at the University of Illinois. I'm spending a lot of time thinking about how to adapt my own courses and our degree programs. I'm actually at a workshop about incorporating AI into computing education, so this was a timely post to find this morning.<p>We don't have a coherent message yet. Currently there's a significant mismatch between what we're teaching and the reality of the computing profession our students are entering. That's already true today. Now imagine 2030, when the students we admit today will start graduating. We're having students spend far too much time practicing classical programming, which is both increasingly unnecessary and an impediment to teaching other concepts effectively. You learn something about resource allocation from banging out malloc by hand, but not as much as you could if you properly leveraged coding agents.<p>Degree programs also take time and energy to update, and universities just aren't designed to deal with the speed of the changes we're witnessing. Research about how to incorporate AI in computing education is outdated before the ink is dry. New AI degrees that are now coming online were designed several years ago and don't acknowledge the emergent behavior we've seen over the past year. Given the constraints faculty operate under, it's just hard to keep up. I'm not defending those constraints: We need to do better at adapting for the foreseeable future. Creating the freedom to innovate and experiment within our educational systems is a bigger and more fundamental challenge than people realize, and one that's not getting enough attention. We have a huge task ahead to update both how and what we teach. I'm incorporating coding agents into my introductory course (<a href="https://www.cs124.org/ai" rel="nofollow">https://www.cs124.org/ai</a>) and designing a new conversational programming course for non-technical students.
And of course I'm using AI to accelerate all of this work.<p>Emotionally, most of my colleagues seem to be stuck somewhere on the Kübler-Ross progression: denial (coding agents don't work), anger (coding agents are bad), bargaining (but we still need to teach Python, right?), depression (computing education is over). We're scared and confused too: acceptance is hard when you don't know what's happening next. That makes it hard to effectively communicate with our students, even if there's a clear basis for connection. Also keep in mind that many computing faculty don't code, and so lack a first-hand perspective on what's changing. (One of the more popular posts about how to use AI effectively on our faculty Slack was about correcting LaTeX formatting for a paper submission. Sigh.)<p>Here's what I'm telling students. First, if you use AI to complete an assignment that wasn't designed to be completed with AI, you're not going to learn much: not much about the topic, or about how to use AI, since one-shotting homework is not good prompting practice. Second, you have to learn how to use these new tools and workflows. Most of that will need to be done outside of class. Start immediately. Finally, speak up! Pressure from students is the most effective driver of curricular change. Don't expect that the faculty teaching your courses understand what's happening.<p>Personally I've never been more excited to teach computing. I'm a computing educator: I've always wanted my students to be able to build their castles in the sky. It was so hard before! It's easier now. Cue frisson. That's going to invite all kinds of new people with new ideas into computing, and allow us to focus on the meaningful stuff: coming up with good ideas, improving them through iterative feedback, understanding other problem domains, and caring enough to create great things.</p>
]]></description><pubDate>Mon, 16 Mar 2026 12:06:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=47397860</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=47397860</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=47397860</guid></item><item><title><![CDATA[New comment by gchallen in "First impressions of Claude Cowork"]]></title><description><![CDATA[
<p>I've built several bespoke "apps" that are essentially Claude Code + a folder with files in it. For example, I have Claude Coach, which designs ultimate frisbee workouts for me. We started with a few Markdown files—one with my goals, one with information about my schedule, another with information about the equipment and facilities I have access to, and so on. It would access those files and use them to create my weekly workout plans, which were also saved as files under the same folder.<p>Over time this has become more sophisticated. I've created custom commands to incorporate training tips from YouTube videos (via YT-DLP and WhisperX) and PDFs of exercise plans or books that I've purchased. I've used or created MCP servers to give it access to data from my smart watch and smart scale. It has a few database-like YAML files for scoring things like exercise weight ranges and historical fitness metrics. At some point we'll probably start publishing the workouts online somewhere where I can view and complete them electronically, although I'm not feeling a big rush on that. I can work on this at my own pace and it's never been anything but fun.<p>I think there's a whole category of personal apps that are essentially AI + a folder with files in it. They are designed and maintained by you, can be exactly what you want (or at least can prompt), and don't need to be published or shared with anyone else. But to create them you needed to be comfortable at the command line. I actually had a chat with Claude about this, asking if there was a similar workflow for non-CLI types. Claude Cowork seems like it. I'll be curious to see what kinds of things non-technical users get up to with it, at least once it's more widely available.</p>
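<p>A minimal sketch of what one of these "AI + a folder with files" apps might look like on disk, loosely modeled on the Claude Coach setup described above. Every file and folder name here is hypothetical, purely for illustration:</p>

```shell
# Hypothetical scaffold for a "Claude Code + folder" personal app.
# None of these are the real Claude Coach files; they just illustrate
# the split between context files, generated outputs, and data.
mkdir -p coach/context coach/workouts coach/data

# Context files the agent reads before planning anything.
cat > coach/context/goals.md <<'EOF'
# Goals
- Build sprint endurance for ultimate frisbee.
EOF
touch coach/context/schedule.md coach/context/equipment.md

# A database-like YAML file for scores and history.
cat > coach/data/metrics.yaml <<'EOF'
weight_ranges:
  goblet_squat: {low: 35, high: 50}
EOF

ls -R coach
```

<p>The agent is simply pointed at the folder and reads or writes these files directly; custom commands and MCP servers layer on top without changing the basic shape.</p>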
]]></description><pubDate>Fri, 16 Jan 2026 00:06:49 +0000</pubDate><link>https://news.ycombinator.com/item?id=46641296</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=46641296</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46641296</guid></item><item><title><![CDATA[New comment by gchallen in "Ask HN: What would you do if you didn't work in tech?"]]></title><description><![CDATA[
<p>Teach high school English.</p>
]]></description><pubDate>Mon, 22 Dec 2025 17:00:12 +0000</pubDate><link>https://news.ycombinator.com/item?id=46355925</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=46355925</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46355925</guid></item><item><title><![CDATA[New comment by gchallen in "Implications of AI to schools"]]></title><description><![CDATA[
<p>As a teacher, I agree. There's a ton of covert AI grading taking place on college campuses. Some of it by actual permanent faculty, but I suspect most of it by overworked adjuncts and graduate student teaching assistants. I've seen little reporting on this, so it seems to be largely flying under the radar. For now. But it's definitely happening.<p>Is using AI to support grading such a bad idea? I think that there are probably ways to use it effectively to make grading more efficient and more fair. I'm sure some people are using good AI-supported grading workflows today, and their students are benefiting. But of course there are plenty of ways to get it wrong, and the fact that we're all pretending that it isn't happening is not facilitating the sharing of best practices.<p>Of course, contemplating the role of AI grading also requires facing the reality of human grading, which is often not pretty. Particularly the relationship between delay and utility in providing students with grading feedback. Rapid feedback enables learning and change; once feedback is delayed too long, its utility falls to near zero. I suspect this curve actually goes to zero much more quickly than most people think. If AI can help educators get feedback returned to students more quickly, that may be a significant win, even if the feedback isn't quite as good. And reducing grading burden also opens up opportunities for students to directly respond to the critical feedback through resubmission, which is rare today on anything that is human-graded.<p>And of course, a lot of times university students get the worst of both worlds: feedback that is both unhelpful and delayed. I've been enrolling in English courses at my institution—which are free to me as a faculty member. In mid-October, I turned in a 4-page paper for the course I'm currently taking. I received a few sentences of written feedback over a month later, and only two days before our next writing assignment was due.
I feel lucky to have already learned how to write, somehow. And I hope that my fellow students in the course who are actual undergraduates are getting more useful feedback from the instructor. But in this case, AI would have provided better feedback, and much more quickly.</p>
]]></description><pubDate>Tue, 25 Nov 2025 16:48:40 +0000</pubDate><link>https://news.ycombinator.com/item?id=46047724</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=46047724</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=46047724</guid></item><item><title><![CDATA[New comment by gchallen in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>Immediate feedback from a good autograder provides a much more interactive learning experience for students. They are able to face and correct their mistakes in real time until they arrive at a correct solution. That's a real learning opportunity.<p>The value of educational feedback drops rapidly as time passes. If a student receives immediate feedback and the opportunity to try again, they are much more likely to continue attempting to solve the problem. Autograders can support both; humans, neither. It typically takes hours or days to manually grade code just once. By that point students are unlikely to pay much attention to the feedback, and the considerable expense of human grading makes it unlikely that they are able to try again. That's just evaluation.<p>And the idea that instructors of computer science courses are in a position to provide "expert feedback" is very questionable. Most CS faculty don't create or maintain software. Grading is usually done by either research-focused Ph.D. students or undergraduates with barely more experience than the students they are evaluating.</p>
]]></description><pubDate>Tue, 27 May 2025 17:35:14 +0000</pubDate><link>https://news.ycombinator.com/item?id=44109025</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=44109025</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44109025</guid></item><item><title><![CDATA[New comment by gchallen in "Trying to teach in the age of the AI homework machine"]]></title><description><![CDATA[
<p>We've been doing this at Illinois for 10 years now. Here's the website with a description of the facility: <a href="https://cbtf.illinois.edu/" rel="nofollow">https://cbtf.illinois.edu/</a>. My colleagues have also published multiple papers on the testing center—operations, policies, results, and so on.<p>It's a complete game changer for assessment—anything, really, but basic programming skills in particular. At this point I wouldn't teach without it.</p>
]]></description><pubDate>Tue, 27 May 2025 17:27:37 +0000</pubDate><link>https://news.ycombinator.com/item?id=44108948</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=44108948</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=44108948</guid></item><item><title><![CDATA[Ask HN: What book should my CS1 students read?]]></title><description><![CDATA[
<p>tl;dr: I teach introductory computer science (CS1) at the University of Illinois (https://www.cs124.org/). What book should my students read to help introduce them to the field of technology?<p>In past years I assigned chapters from "Coders" by Clive Thompson for students to read, assessed by a few points of multiple-choice questions on our weekly quizzes. My goal was to complement the technical content and get students thinking about some of the broader issues surrounding technology. I think "Coders" does a nice job of this—covering some of the history of computing, discussing mental health challenges associated with software development, and providing well-reasoned arguments on sensitive topics such as gender and meritocracy.<p>Maybe I should just bring back "Coders". It was written before the rise of generative AI, but still holds up fairly well.<p>But I thought I'd ask this community for additional ideas. Note that this could either be required for some small amount of credit, or optional, incentivized with a small amount of extra credit. I'm also receptive to different or more open-ended goals, which is probably reflected in some of the ideas listed below. Generally speaking, I'd like the book to counterbalance an observed tendency among students in my course towards not being wary enough about the computing technology that they will one day participate in creating.<p>To get you thinking, a few options that I've been considering:<p>* "The Circle" by Dave Eggers. One of the better satirical takes I've read on our modern technology-centered era. Unfortunately includes some problematic sexual content.
* "1984" by George Orwell. Or "Brave New World" by Aldous Huxley. I'd probably lean toward "1984", since it just seems like a better match for the present moment somehow.
* "Weapons of Math Destruction" by Cathy O'Neil, or one of the many similar cautionary technology tales. One concern is that these tend to be somewhat more focused on one or two specific topics, and don't provide as nice an overview as something like "Coders".
* "Unmasking AI" by Joy Buolamwini, or something similar that mixes biography and technology criticism. Similar specificity concerns here to the group above.<p>Excited to hear your ideas! Thanks in advance. At this point about 2,000 students take my course each year, so whatever I choose does have the potential to impact more than a few young people.</p>
<hr>
<p>Comments URL: <a href="https://news.ycombinator.com/item?id=43890075">https://news.ycombinator.com/item?id=43890075</a></p>
<p>Points: 5</p>
<p># Comments: 5</p>
]]></description><pubDate>Sun, 04 May 2025 22:17:22 +0000</pubDate><link>https://news.ycombinator.com/item?id=43890075</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=43890075</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=43890075</guid></item><item><title><![CDATA[New comment by gchallen in "Duo Outage"]]></title><description><![CDATA[
<p>And the proud Illinois tradition of some mission-critical service crashing on the first day of class continues.<p>In this case, it is an external service. However, I also suspect that the Duo outage is shielding other on-campus services from load surges that would otherwise be making them crashy.<p>I guess I don't know how we could ever prevent such incidents. Given that the first day of classes is a well-kept secret /s.</p>
]]></description><pubDate>Mon, 21 Aug 2023 15:40:39 +0000</pubDate><link>https://news.ycombinator.com/item?id=37211053</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=37211053</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=37211053</guid></item><item><title><![CDATA[New comment by gchallen in "Supreme Court strikes down affirmative action in college admissions"]]></title><description><![CDATA[
<p>> That said I’m not sure how much gender based affirmitive action there is in science/engineering today.<p>Potentially quite a bit. Here's some recent data about admissions into the highly-competitive Illinois CS program: <a href="https://www.reddit.com/r/UIUC/comments/12kwc4a/uiuc_cs_admissions_demographics_data_since_2019/" rel="nofollow noreferrer">https://www.reddit.com/r/UIUC/comments/12kwc4a/uiuc_cs_admis...</a><p>Note that admissions rates for female applicants are higher across all categories—international, out-of-state, and in-state. Obviously you can't fully tell what's going on here without more of an understanding of the strengths of the different pools, but a 10–30% spread (for in-state) suggests that gender is being directly considered.<p>IANAL, but I'm also concerned about the degree to which this decision affects the use of other factors during college admissions. Fundamentally admissions is a complex balance between prior performance and future potential, and only admitting based on prior performance means that we're stuck perpetuating existing societal inequities.</p>
]]></description><pubDate>Thu, 29 Jun 2023 15:23:36 +0000</pubDate><link>https://news.ycombinator.com/item?id=36521659</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=36521659</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36521659</guid></item><item><title><![CDATA[New comment by gchallen in "ASU: The New American University (2021)"]]></title><description><![CDATA[
<p>As a CS faculty member at Illinois (aka UIUC), I don't think that we fit this model.<p>At least according to my quick reading of the article, ASU has a significant focus on inclusion as a core value. Overall Illinois does admit a large percentage of applicants: about 50% over recent years. (The number dropped a bit after we began participating in the Common App, which makes it easier for students to increase the number of institutions they apply to.)<p>However, that number hides the fact that admission to top programs like computer science is extremely selective and exclusive. Admission rates to CS have been around 7% recently. And while we've made a CS minor somewhat more accessible, we've also closed down pathways that allowed students to start at Illinois and transfer into a computer science degree. (At this point that's pretty much impossible.) We do have blended CS+X degree programs that combine core studies in computer science with other areas, and those are less selective, but they have their own limitations—specifically, having to complete a lot of coursework in some other area that may not interest you.<p>I think what's fooling you about Illinois is the fairly odd combination of a highly-selective department (CS) embedded in a less-selective institution. I'm sure that there are other similar pairings, but overall this is somewhat unusual. If you think about other top-tier CS departments—Stanford, Berkeley, MIT, CMU—most are a part of an equally-selective institution.<p>So with Illinois you're getting the cachet of an exclusive department combined with the high acceptance rate of an inclusive public land-grant university. But on some level this is a mirage created by colocated entities reflecting different value systems. And, unlike places like Berkeley and Virginia, which have been trying to admit more students into computing programs, no similar efforts are underway here at Illinois.
(To my dismay.)<p>Overall, unfortunately it's still very obvious to me that exclusivity is part of what we're selling to students as a core value of our degree program. You're special if you got in—just because a lot of other people didn't. Kudos to anyone moving away from this kind of misguided thinking.</p>
]]></description><pubDate>Wed, 21 Jun 2023 17:46:04 +0000</pubDate><link>https://news.ycombinator.com/item?id=36421849</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=36421849</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=36421849</guid></item><item><title><![CDATA[New comment by gchallen in "Pain points of teaching computer science"]]></title><description><![CDATA[
<p>I've taught CS1 for going on 6 years now, to almost 10K students. I'll admit I found this post depressing to read. Because many of these pain points have obvious solutions that have been in use at many institutions for years. (Some of these solutions are in the paper, but are not novel.)<p>When / where are students struggling? Assess them frequently and you'll find out! We run weekly quizzes in my class. So we know exactly who's struggling with exactly what, and quickly. That allows us to do individual outreach, and for students to catch up before they get too far behind. We also use daily homework, for the same reasons. But a lot of CS1 courses are still using the outdated midterm and final model, maybe with some homework sprinkled in.<p>Frequently a glut of repetitive student questions points to bad course materials or poor course design. Make things clearer and make it easier for students to find information, and at least some of the repetitive question asking will diminish.<p>Grading and TA support are related. Graduate TA quality does vary greatly, and you need to design around this. For example: Never put students in a position to suffer for an entire semester at the hands of a bad TA. (Many courses do.) Undergraduates are almost always better at assisting with early CS courses, and usually cheaper. We've been shifting gradually toward more undergraduate support for our CS1 course, and it has been working out well. They frequently outperform graduate staff.<p>But no amount of course staff will be sufficient if you have them spend all of their time on tedious tasks that computers can do better: Like grading code! It's 2023. If you can't deploy your own autograder, buy one. Staff time grading code should be minimized or eliminated altogether. Freeing staff time for student support allows you to provide students with more practice, and accelerates the overall learning process.
But many early CS courses are stuck in a situation where staff grading is bottlenecking how many problems they can assign. That's insane, when autograding is a well-established option. (Even if you want to devote some staff time to grading code quality, autograding should always be used to establish correctness. And you can automate many aspects of code quality as well.)<p>In my experience, what's at the root of a lot of these problems is simply that many people teaching introductory CS can't build things. Maybe they can implement Quicksort (again), but they can't create and deploy more complex user-facing systems. I mean, you can create an autograder using a shell script! Not a great one, but still far superior to manual human grading. Part of this is because these jobs pay poorly. Part is how we hire people for them, because the ability to build things isn't typically a criterion. Part of it is that there's little support for this in academia. It took me years of inane meetings to get a small cluster of machines to run courseware on for my 1000+ student class that generates millions of dollars in revenue.<p>But there's also a degree to which the CS educational community has started to stigmatize expert knowledge. If you do enjoy creating software and are good at it, you get a lot of side eye from certain people. "You know that students don't learn well from experts, right?" And so on. Yes, there is a degree to which knowing how to do something is not the same as being able to teach someone how to do it. But would you take music lessons from someone who was not only a mediocre player, but didn't seem to like music that much at all?</p>
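<p>To make the shell-script claim concrete, here is roughly what a bare-bones diff-based autograder can look like. This is a sketch, not a production tool; the toy submission and test files are fabricated inline so the script is self-contained:</p>

```shell
#!/bin/sh
# Minimal diff-based autograder sketch. All file names are hypothetical.
set -eu

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# A toy "submission" plus one test case, standing in for real student work.
cat > "$workdir/submission.sh" <<'EOF'
read a b
echo $((a + b))
EOF
printf '2 3\n' > "$workdir/test1.in"
printf '5\n'   > "$workdir/test1.expected"

# Core loop: run the submission on each input, diff against expected output.
pass=0; total=0
for input in "$workdir"/*.in; do
  total=$((total + 1))
  expected="${input%.in}.expected"
  actual=$(sh "$workdir/submission.sh" < "$input")
  if [ "$actual" = "$(cat "$expected")" ]; then
    pass=$((pass + 1))
  fi
done
echo "passed $pass/$total"
```

<p>Real autograders add timeouts, sandboxing, and per-test scoring, but the run-and-diff loop at the core really is about this small.</p>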
]]></description><pubDate>Tue, 09 May 2023 23:10:31 +0000</pubDate><link>https://news.ycombinator.com/item?id=35881556</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35881556</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35881556</guid></item><item><title><![CDATA[New comment by gchallen in "More than a third of community college students have vanished"]]></title><description><![CDATA[
<p>Absolutely. The article makes it sound like problems with poor advising are confined to community colleges. That's ludicrous.<p>And it's not just state schools. When my wife went back to RIT (Rochester Institute of Technology, an expensive private school) to study photography, nobody there could even answer questions like: What courses do I need to graduate? Knowing that might be useful!<p>A lot of the student management systems used at many schools are extremely antiquated. Degree requirements are expressed in ways that make it difficult or impossible to create tools to help students plan their schedules and stay on track to receive their degree on time. Registration can make it hard for students to enroll in courses they need to complete their program. Many advisers are doing their best, and come up with hacks and workarounds to try and help individual students, but access to advising can be limited and advisers are also affected by the poor data and tooling.<p>And of course, it's worth pointing out that universities are positioned to benefit from student mistakes. Didn't realize that you needed that course to graduate? See you next semester! Make sure to bring your (or your family's) checkbook. I'm sorry you couldn't register for that popular course as an undergraduate! Have you considered our MS programs?<p>These problems are not limited to community colleges.</p>
]]></description><pubDate>Tue, 11 Apr 2023 15:04:03 +0000</pubDate><link>https://news.ycombinator.com/item?id=35526165</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35526165</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35526165</guid></item><item><title><![CDATA[New comment by gchallen in "What does a research grant pay for?"]]></title><description><![CDATA[
<p>One of the biggest problems with overheads is that administrators are frequently simply unable to explain or justify them.<p>The vast majority of research-active universities in the US are non-profit, and so it doesn't really make sense to say that the university is "profiting" off of overheads. Overheads are also negotiated with each funding agency, and my understanding is that the university has to prepare a fairly detailed case to support their particular number. They can't just make something up.<p>But overhead rates can vary fairly significantly between similar institutions. And when, as a faculty member, you start asking questions about why, you're frequently met with defensiveness, nonsense, or both. When I worked in the Northeast, I remember someone claiming that our overheads were 10+% higher than another institution's due to... snow removal. (Which wasn't that effective either.)<p>Overheads also seem to only go up. Because of course.<p>Provide a reasonable justification for why you're charging me a certain amount, and how the money gets spent, and we're cool. I understand the university is a complex place and there's a lot involved in supporting what I'm doing. But when you can't provide a reasonable explanation, people start to wonder why not.</p>
]]></description><pubDate>Mon, 10 Apr 2023 02:29:34 +0000</pubDate><link>https://news.ycombinator.com/item?id=35508574</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35508574</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35508574</guid></item><item><title><![CDATA[New comment by gchallen in "What does a research grant pay for?"]]></title><description><![CDATA[
<p>This varies a lot from faculty to faculty and from position to position. Some will work nonstop regardless of how many grants they have. (If you have grants, use them; if not, write them.) For pre-tenure research faculty, behaving this way is pretty much expected.<p>Others who are more senior and less research active may shoo their students off to internships (particularly in CS) and spend the summer doing very little. A lot of departmental committee work is paused, and people tend to know who is and is not available over the summer and will make arrangements accordingly. (I.e., don't assign Professor Tee Time to a committee that needs to function over the summer.)</p>
]]></description><pubDate>Mon, 10 Apr 2023 02:10:45 +0000</pubDate><link>https://news.ycombinator.com/item?id=35508473</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35508473</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35508473</guid></item><item><title><![CDATA[New comment by gchallen in "For Lower-Income Students, Big Tech Internships Can Be Hard to Get"]]></title><description><![CDATA[
<p>I find this unsurprising. But I think the problem is upstream.<p>Getting into a top-tier CS program has gotten incredibly competitive, and college has gotten incredibly expensive. At Illinois (where I teach), the university admits 40% of its applicants. Our CS program admits 7%, with a demographic profile that bears no relationship to that of the broader university population. Relevant to this article: far more out-of-state students, and therefore students from wealthier backgrounds. (Because our out-of-state tuition is insane.)<p>So if lower-income students can't get internships, it's probably because it's hard for them to get into a decent CS program in the first place.<p>And from where I sit, the situation is even more frustrating, because we have thousands of data points showing that students drawn from the general university population can and do succeed in our CS courses—at least those required for the minor, which has become increasingly popular. But a lot of CS departments have felt increasingly besieged over the past few decades, as we've been swamped with students and frequently not been provided with appropriate resources. (Although there are also plenty of programs out there insisting on doing things that don't scale well, which exacerbates the problem.) So increasingly the "answer" is to clamp down on admissions, in ways that usually disproportionately affect certain populations.<p>We have a lot of ongoing BPC and DEI efforts in my department. But there's very little if any focus on admissions. I asked recently, and apparently we don't even know the demographic breakdown of our applicant pool.<p>Regardless of what aspects of diversity you care about, one of the biggest sources of inequity in CS today is in university admissions.</p>
]]></description><pubDate>Fri, 07 Apr 2023 15:25:05 +0000</pubDate><link>https://news.ycombinator.com/item?id=35482572</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35482572</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35482572</guid></item><item><title><![CDATA[New comment by gchallen in "Dropping the SAT requirement is a luxury belief"]]></title><description><![CDATA[
<p>Related reading which I didn't see posted yet: <a href="https://www.theatlantic.com/ideas/archive/2022/04/mit-admissions-reinstates-sat-act-tests/629455/" rel="nofollow">https://www.theatlantic.com/ideas/archive/2022/04/mit-admiss...</a><p>Lot of great stuff in there, but here's one quote:<p>> Richer students don’t just get better SAT scores. They also tend to outperform on everything else that an admissions committee would use to select students. Personal essays? Their style and content are more strongly correlated with family income than SAT scores are. Recommendation letters? They are subject to teachers’ classist and racist biases, and even knowing how to request the letters requires significant social capital.</p>
]]></description><pubDate>Sun, 05 Mar 2023 16:09:53 +0000</pubDate><link>https://news.ycombinator.com/item?id=35030856</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=35030856</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=35030856</guid></item><item><title><![CDATA[New comment by gchallen in "Early morning university classes linked to poor sleep and academic performance"]]></title><description><![CDATA[
<p>One thing to always note in these studies, particularly at the college level, is whether students were randomly assigned to different sections. (I glanced at the study and couldn't confirm that this was how it was done, although they do reference studies that show similar results for random assignment.)<p>The reason is that registration at universities has become akin to airline boarding, comprising a complex schedule of priorities allowing certain privileged groups to register first—students in various honors programs, students enrolled in the same major as the course, athletes, and so on. As a result, more desirable class times (or instructors) will frequently be populated with more successful or well-prepared students than less desirable ones. (And 8AM is about as unfavorable as it gets.)<p>Whether or not registration prioritization makes any logical sense or serves any objectives other than making some students feel special is another question. (It's certainly not optimized around minimizing time to degree. Because why would we care about that? /s) And clearly it can have the effect of putting students who are already at risk into schedules that place them more at risk. But the effects of non-random assignment can also pollute studies like this.<p>Why we're holding classes at 8AM at all is another great question. You'd be tempted to answer "because space", but you might be surprised to find that many large classrooms are idled as early as mid-afternoon. Students seem to also dislike afternoon courses, but I suspect that faculty preferences are more the reason that those time slots don't get used.</p>
]]></description><pubDate>Tue, 21 Feb 2023 19:24:56 +0000</pubDate><link>https://news.ycombinator.com/item?id=34885694</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=34885694</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34885694</guid></item><item><title><![CDATA[New comment by gchallen in "CheatGPT"]]></title><description><![CDATA[
<p>Language ability, manner of speaking, physical stature and presentation, reputation from previous interactions with staff, you name it—all worsened by the fact that many of these are probably going to be done by course staff. It's not clear to me that any other form of assessment has as much potential for subconscious bias.<p>Orchestras started using privacy screens for auditions for a reason. And I'm not familiar with an equivalent for the human voice, particularly for hiding halting, labored, or elliptical speech—possibly by a non-native speaker—that they could straighten out on the page.</p>
]]></description><pubDate>Mon, 20 Feb 2023 22:29:55 +0000</pubDate><link>https://news.ycombinator.com/item?id=34873822</link><dc:creator>gchallen</dc:creator><comments>https://news.ycombinator.com/item?id=34873822</comments><guid isPermaLink="false">https://news.ycombinator.com/item?id=34873822</guid></item></channel></rss>