Acknowledged. This will be fixed soon. Thanks
Best posts made by info-creaticode
-
RE: Some avatars aren't shown to select until searched for
-
How to record and share screen recordings as gifs
Introduction
When you need to explain how a project should work, the best way is to make a screen recording of the stage. This short article explains which tools you can use to record the screen, and how to share the resulting gif file to get a URL for it.
Screen Recording on Windows
If you are using a Windows computer, we recommend a free tool named “ScreenToGif”, which you can download here: https://www.screentogif.com/
Here is a short video explaining how to use it: https://www.youtube.com/watch?v=ELfCBzN1Mtc
Screen Recording on Macbook
If you are using a Macbook, you can use a free tool called “Giphy Capture”: https://giphy.com/apps/giphycapture
Here is a short video showing how to use it: https://www.youtube.com/watch?v=m-4cJMBGfS4
Share your gif file
Once your gif file is ready, you can share it this way:
-
Go to the “My Stuff” page at https://play.creaticode.com/mystuff, and make sure you are logged in.
-
Select the “My Files” tab on the bottom left
-
Click the “+Share a New File” button on the top right
-
In the pop-up window, click “Click to upload file”, and then select the gif file from your computer.
-
Click the green button that says “upload and share with the CC Attribution license” at the bottom.
After that, you will see the new file in the list of files. Click the “Copy file URL” button for that file to copy its URL to the clipboard. The URL will look like this: https://ccdn.creaticode.com/user-files/BfemEPCxatY6MMAPs/castfireball.gif
-
-
Wishlist for New Models
If you are looking for a particular 3D model that is not in our library, please reply to this topic. If you can include an example picture, it will be clearer what you are looking for. We will try to fulfill these requests as best as we can. Please search for the model first to make sure you do not submit a duplicate request.
Thanks
CreatiCode Support -
RE: Current error with accessing the website
Hi all,
Sorry, there was an outage on our platform earlier this morning. We apologize for the inconvenience. We are now back online.
CreatiCode
-
RE: Code block presets?
You are right. We should allow users to submit new extensions.
However, the code snippet library should work more like the backpack than an extension, since it will allow users to modify the code blocks after a snippet is imported. If it were an extension, users would not be able to see its implementation or customize it.
-
Shape-Based Particle Emitters
Introduction
You learned about the single-point particle emitter, which generates particles from a single point in 3D space.
In this article, we will discuss “shape-based” emitters, which generate particles from within a 3D shape like a box. They allow us to produce very different visual effects.
Box Emitters
The box emitter is simply a transparent 3D box, and it can generate new particles from any random point inside this box. We cannot see the box itself since it is transparent, but we can infer its shape from where the particles are generated.
To use the box emitter, we need 2 steps:
- Select the “Box” shape when creating the emitter
- Configure the size of the box by its minimum and maximum X/Y/Z positions
Here is a simple example:
This program creates a box emitter that is 400 units in each dimension. For example, in the X dimension, the minimum is -200 and the maximum is 200, so the x position of new particles can be any random value between -200 and 200. When you run this program, you can see the particles are all confined within this box-shaped region:
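The sampling a box emitter performs can be sketched in a few lines of Python. This is only an illustration of the idea, not CreatiCode's actual implementation: each coordinate is drawn uniformly between the configured minimum and maximum.

```python
import random

def sample_box(min_xyz, max_xyz):
    """Pick a uniformly random emission point inside an axis-aligned box."""
    return tuple(random.uniform(lo, hi) for lo, hi in zip(min_xyz, max_xyz))

# A 400x400x400 box centered at the origin, as in the example above:
point = sample_box((-200, -200, -200), (200, 200, 200))
```

Every coordinate of every emitted point stays within the box, which is why the particles appear confined to the box-shaped region.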
Sphere Emitters
The shape of the emitter can also be a sphere. When we configure the sphere shape, we can set its size using the “radius” parameter.
There is also a “range” parameter (between 0 and 1), which controls the range of possible values along the radius. You can think of “range” as “thickness”:
- When “range” is 0, the particles will only be generated on the surface of the sphere, and not inside it.
- When “range” is 1, the particles can be generated at any random point on the surface or inside the sphere.
- When “range” is between 0 and 1, the particles are generated in a shell just under the surface: never near the center of the sphere, but on or near the surface.
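One way to picture the “range” parameter is as a sketch of the sampling math (an illustrative model, not the platform's code): pick a uniformly random direction, then pick a distance from the center between (1 − range) × radius and the full radius.

```python
import math
import random

def sample_sphere(radius, rng):
    """Pick a random emission point in a spherical shell.

    rng = 0 -> only on the surface; rng = 1 -> anywhere inside;
    in between -> a shell of thickness rng * radius under the surface.
    """
    # Uniform random direction: normalize a 3D Gaussian sample.
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    n = math.sqrt(x * x + y * y + z * z) or 1.0
    # Distance from the center, between (1 - rng) * radius and radius.
    r = random.uniform((1 - rng) * radius, radius)
    return (x / n * r, y / n * r, z / n * r)
```

With `rng = 0` the distance is always exactly the radius, so every particle sits on the surface, matching the example below.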
Here is an example program with a range of 0:
As shown, particles are only appearing on the surface of the sphere:
Hemisphere Emitters
Hemisphere emitters generate particles from a half-sphere shape. You can specify its radius and range the same way as the sphere emitters.
One special thing about hemisphere emitters is that we can rotate the half sphere to face different directions. As shown below, when we rotate it around the X-axis by 90 degrees, the new particles are all generated in the bottom half of the sphere:
Cylinder Emitters
The emitter can also take a cylinder shape. We can control the radius of the circle, and also the height of the cylinder.
The “range” parameter also works for cylinders. You can think of it as controlling the “thickness” of the cylinder’s skin. For example, when the range is 0.5, the thickness of the cylinder’s skin is half of the radius:
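The cylinder version of this sampling can be sketched the same way (again, an illustrative model only): a random angle around the axis, a distance from the axis limited by the range, and a random height along the axis.

```python
import math
import random

def sample_cylinder(radius, height, rng):
    """Pick a random emission point in a vertical cylinder.

    rng controls the skin thickness: 0 -> side surface only,
    1 -> anywhere inside, 0.5 -> the outer half of the radius.
    """
    angle = random.uniform(0, 2 * math.pi)
    r = random.uniform((1 - rng) * radius, radius)
    y = random.uniform(0, height)  # position along the cylinder's axis
    return (r * math.cos(angle), y, r * math.sin(angle))
```

With `rng = 0`, every particle's horizontal distance from the axis equals the radius, so the particles form a hollow tube.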
The cylinder shape also supports an additional parameter of “direction randomness”. This only matters when we set the particles to not face the camera all the time, and we will see the particles facing different random directions when we set “direction randomness” to 100:
Lastly, we can rotate the cylinder emitter as well. For example, we can make it “lie down”:
Cone Emitters
For the cone-shaped emitter, we can configure these parameters:
- Radius: The radius of the bottom circle of the cone
- Angle: The opening angle of the cone. Note that the angle and the radius together determine the height of the cone, so we don’t need a separate “height” parameter.
- Radius Range: This range value applies along the radius direction. When it is 0, the particles will only be generated on the surface of the cone. When it is 1, the particles may come out from anywhere inside the cone or on its surface.
- Height Range: This range value applies along the height of the cone. When it is 0, the particles will only emerge from the bottom of the cone, and when it is 1, the entire height can generate particles.
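The note about the implied height can be made concrete with basic trigonometry. This sketch assumes “angle” means the full opening angle at the cone's tip (if the platform uses the half-angle instead, drop the division by 2):

```python
import math

def cone_height(radius, opening_angle_degrees):
    """Height implied by the bottom radius and the full opening angle."""
    half_angle = math.radians(opening_angle_degrees) / 2
    return radius / math.tan(half_angle)

# For example, a radius of 100 with a 90-degree opening angle
# gives tan(45 degrees) = 1, so the implied height is also 100.
h = cone_height(100, 90)
```

A narrower opening angle therefore produces a taller, thinner cone for the same bottom radius.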
We can rotate cone emitters as well.
-
RE: Multiplayer 3D games
We are working on fixing the cloud blocks. Should be within next few days. Sorry about that.
-
Number of Seconds since 2000
Introduction
In MIT Scratch, you can already manage dates using the “days since 2000” block. However, if you need a more granular control of date and time, you can use this new block:
This block will return the number of seconds that have passed between the given timestamp and the beginning of 2000.01.01.
Input Format
The input is a timestamp, which contains the date, a “T”, and then the time. The date is written as year.month.day, and the time as hour:minute:second. Each field has to be 2 digits, except the year, which has to be 4 digits.
The timestamp is assumed to be the local time of the computer that’s running this program. If you want to use the UTC time, which is the same across the world, then append a “Z” at the end of the timestamp, such as “2024.01.01T10:00:00Z”.
If the input is left empty, then the current time is used:
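The block's behavior can be approximated in Python. This is a sketch of the semantics described above, not the platform's actual code; the format string assumes dots in the date, as in the example timestamp:

```python
from datetime import datetime, timezone

EPOCH_2000 = datetime(2000, 1, 1, tzinfo=timezone.utc)

def seconds_since_2000(timestamp=""):
    """Seconds between the given timestamp and the beginning of 2000.01.01."""
    if not timestamp:
        # Empty input -> use the current time.
        dt = datetime.now(timezone.utc)
    elif timestamp.endswith("Z"):
        # Trailing "Z" -> interpret as UTC time.
        dt = datetime.strptime(timestamp[:-1], "%Y.%m.%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    else:
        # Otherwise -> interpret as the computer's local time.
        dt = datetime.strptime(timestamp, "%Y.%m.%dT%H:%M:%S").astimezone()
    return (dt - EPOCH_2000).total_seconds()

# seconds_since_2000("2024.01.01T10:00:00Z") -> 757418400.0
# (24 years, 6 of them leap years: 8766 days * 86400 s + 10 h * 3600 s)
```

The difference between two timestamps is then just the difference of their two return values.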
Calculating Time Difference
With this new block, you can easily calculate how many seconds are between 2 timestamps. You just need to run this new block for both timestamps, then calculate the difference between them:
Converting to Date
You can also convert the number of seconds back to a Date object. For example, the program below first gets the number of seconds for a specific timestamp, then creates a date object using that result, and we get back the same timestamp as our input:
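The reverse conversion is simply an offset from the start of 2000. A minimal sketch of the round trip (illustrative, not the platform's code):

```python
from datetime import datetime, timedelta, timezone

EPOCH_2000 = datetime(2000, 1, 1, tzinfo=timezone.utc)

def date_from_seconds(seconds):
    """Convert a seconds-since-2000 value back into a date/time object."""
    return EPOCH_2000 + timedelta(seconds=seconds)

# Round trip: 757418400 seconds after 2000.01.01 is 2024.01.01T10:00:00 UTC.
dt = date_from_seconds(757418400)
```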
-
RE: Issue.
@luna
Well, we are a small team, and we have many more urgent tasks on the list.
-
3D - A Spinning Earth (Difficulty: 1)
Key Topics Covered
Introduction
In this tutorial, you will learn to create a spinning Earth:
Step 1 - Initialize An Empty Scene
First, create a new project, and load an empty scene using the “initialize 3D scene” block.
By default, it will create an empty scene with nothing but a blue background:
Step 2 - Set the Background Starfield
Next, use the “set sky” block to create a better-looking backdrop:
You should get a starfield with the Sun on the right.
Step 3 - Add a Big Sphere
Next, add a sphere with a large diameter of 10000 to the scene. Don’t worry about its color yet.
The sphere would look brighter on the side that faces the Sun.
Step 4 - Add Earth Texture
Now we need to update the sphere’s texture with the Earth. Add the “update texture” block, click the “Please select” input box, then search for “Earth” in the library window.
Step 5 - Flip the Earth Texture
You might have noticed an issue with the texture: the continents are upside-down. We need to flip the texture vertically to correct this issue. This can be done by changing the vertical repeat count from 1 to -1.
Step 6 - Make the Earth Spin
To make the Earth object spin, we can use the “set speed” block. Note that the Earth needs to be spinning from “left” to “right”, so the “Z-rotation” speed needs to be negative.
Now your Earth object should be spinning slowly.
Step 7 - Highlight Around the Earth
Lastly, to make the Earth object glow in blue lights, we can create a new highlight layer, then add the sphere to that layer.
Now your Earth object should carry a blue light around it.
Next Steps
You can try to use a similar method to build other projects. Here are some example ideas:
- A Different Planet: You can change Earth to other planets like Mars;
- A Spinning Trophy: You can try to make a trophy object spin and shine.
Latest posts made by info-creaticode
-
AI-Generated Images: Ethics and Responsibilities
AI-Generated Images: Ethics and Responsibilities
1. Introduction
What You Will Learn:
- How AI uses existing images to create new artwork (the “remix” process).
- Why copyright may be an issue when AI replicates famous styles or characters.
- How AI can sometimes produce offensive, biased, or racist images.
- The dangers of misinformation with AI-generated images.
- Ways to think critically about using AI responsibly and ethically.
Why It Matters:
AI image-generation tools are more accessible than ever. They can help us create fun, imaginative pictures — but they also raise important questions. As you learn to use AI image generation tools on CreatiCode, remember that your choices can affect others and touch on broader social issues.
2. How AI “Remixes” Existing Artwork
2.1 AI creates new images by remixing existing images
Most AI tools learn by looking at huge collections of existing images. Through mathematical patterns, the AI figures out shapes, colors, and styles. When you give it a prompt (like “A dog playing the piano in the style of Van Gogh”), it pulls together what it has “seen” about dogs, pianos, and Van Gogh’s painting style. It then remixes those elements to form something new.
Activity (Optional with CreatiCode):
- Go to CreatiCode (or any AI art generator you have access to).
- Type the prompt: “A dog playing the piano in the style of Van Gogh.”
- Look at the results.
- Question: Can you spot any hints of Van Gogh’s style (like swirling brush strokes or bright contrasting colors)?
-
Reflect:
- “Do you think the AI is being creative, or is it just copying bits and pieces?”
- “How is this different from a human artist being inspired by someone else’s work?”
Takeaway: AI is powerful at combining elements it has seen before. However, the boundaries between “inspired by” and “copied from” can be blurry.
3. Copyright and Ownership
3.1 What Is Copyright?
Copyright is a legal protection that gives creators control over how their work is used. If you paint a picture, you generally have the right to say who can copy it, sell it, or display it.
3.2 Does AI Infringe on Copyright?
- Possible Argument For Infringement: If the AI model was trained on an artist’s work and the generated image looks very similar to that work, some say that’s copying.
- Possible Argument Against Infringement: Others argue it’s merely learning patterns like a human would. The final result might be “transformative” enough not to violate copyright.
3.3 Famous Cartoon Characters
Let’s say you type, “Generate an image of Mickey Mouse dancing.” This is clearly using a Disney-owned character. Questions to think about:
- Is that considered using someone else’s creation without permission?
- Would it matter if you were only doing it for fun, or for selling T-shirts?
Activity:
- Hypothetical Scenario: You generate a digital poster featuring Mickey Mouse (or Spider-Man) to print on T-shirts you plan to sell.
- Ask Yourself:
- “Am I allowed to sell T-shirts with these characters?”
- “Would it be okay if I gave them away for free instead?”
- “Is there a difference if the AI image is slightly off-model but still recognizable?”
Takeaway: The law is still catching up with AI technologies, but you should always be cautious about using recognizable characters or artworks without permission — especially when money or public distribution is involved.
4. Offensive Images and Racial Bias
4.1 Why Can AI Produce Offensive or Racist Content?
AI depends on the images and text it was trained on. If those training materials have historical or ongoing biases (e.g., stereotypes about certain groups), the AI can reproduce them.
- Example of a Racist Stereotype: An AI might create an image that shows “Black people eating watermelons” if it learned from old cartoons or racist internet sources that perpetuated that stereotype.
- This is hurtful because it repeats harmful narratives that were used to demean Black communities historically.
4.2 Bias in Professional Roles (CEOs, Doctors, etc.)
If you type in “CEO,” many AI models might show a white man in a suit. Why?
- The data (images from news articles, business websites, etc.) often depict white men as leaders.
- The AI is just reflecting the most common images it has seen.
Activity (Optional with CreatiCode):
- Prompt 1: “CEO in a boardroom.” Observe who appears.
- Prompt 2: “CEO in a boardroom, female, diverse.” Compare results.
- Question to think about:
- “Did you notice a difference in how the CEO is portrayed?”
- “How might these biases affect people seeing these images all the time?”
Takeaway: AI can unintentionally reinforce harmful stereotypes. We need to be aware of these biases, question them, and push for more inclusive prompts and data sets.
5. Misuse and Misinformation
5.1 Deepfakes and Fake News
- AI can create realistic images of people doing things they never did or saying things they never said.
- This can be used maliciously to frame people for crimes, spread political misinformation, or create fake evidence.
5.2 Why This Matters to You
- Even if you’re just having fun, be careful about sharing AI-generated images that look “too real.” They could be misunderstood or used by others to mislead people.
Questions to think about:
- “Have you seen any viral images or videos that turned out to be fake?”
- “What steps can you take to verify if an image is real or AI-generated?”
6. Putting It All Into Practice: Interactive Exercise
Try the five-step challenge below:
- Generate a Neutral Image Prompt
- E.g., “A beautiful sunset over the ocean with clouds shaped like animals.”
- Observe: This prompt is unlikely to cause copyright or offensive issues.
- Generate an Image with a Famous Character
- E.g., “Pikachu on a surfboard in a Hollywood action scene.”
- Think: Are you using trademarked content (Pikachu)? What if you share this image widely?
- Generate an Inclusive Prompt
- E.g., “A diverse group of CEOs in a boardroom, representing different genders and races.”
- Compare with a simpler prompt: “A CEO in a boardroom.”
- Notice: Differences in representation.
- Generate a Possibly Sensitive Prompt
- E.g., “Black people eating watermelons.” (This is an offensive prompt, historically.)
- Warning: You may choose not to actually generate it if you’re uncomfortable; the point is to discuss how harmful stereotypes might appear.
- Reflect: Why is this harmful? How should AI tools handle such requests?
- Generate a Fake News Prompt
- E.g., “A well-known political figure stealing money from a bank.”
- Ask: How realistic does the image look? Could someone believe it’s real?
After each prompt, answer these questions:
- “Was there any sign of copyright violation or trademark issue?”
- “Did I notice any bias?”
- “Could this image be misleading if shared online?”
- “What ethical considerations come to mind?”
7. Reflection and Next Steps
7.1 Reflection Questions
- Personal Take:
- What was the most surprising thing you learned about AI-generated images?
- Did you notice any stereotypes or biases in the images you generated or discussed?
- Responsibility and Respect:
- Who should be held responsible if an AI-generated image is offensive: the tool, the user, or the company that made the AI?
- Is it okay to pass off AI-generated images as your own artwork without mentioning AI?
- Positive Uses:
- Can you think of a scenario where AI image generation helps people? For example, helping with concept art for a school project or a new invention design?
8. Additional Considerations
- Fair Use and Transformative Work: In some places, using existing characters or art in a new, creative way might be protected under “fair use.” But fair use is complicated and varies by country.
- Privacy Concerns: AI can generate images of real people. How would you feel if your image was used without permission?
- Future of AI Regulation: Governments and organizations are starting to create guidelines to manage AI’s impact. Stay updated on new laws or rules that might affect what you can and can’t do with AI-generated art.
Final Thoughts
AI image generation is an exciting and fast-growing field. It offers amazing possibilities for creativity but also poses serious ethical questions. By understanding how AI “remixes” existing art, being aware of potential copyright issues, recognizing biases, and watching out for misuse or misinformation, you can become a more responsible user of this technology. Remember: As AI becomes a bigger part of everyday life, your awareness and thoughtful approach will help shape a future where AI is used fairly, respectfully, and ethically.
-
RE: could it be made so projects show their creation date?
We will fix that. Thanks.
-
RE: Issue with comments attached to blocks
Thanks for reporting. We will fix this.
-
RE: could it be made so projects show their creation date?
Thank you. We will fix these issues.
-
RE: notifications when someone comments on your projects
Thank you. That’s a good feature to add.
-
RE: Moderator too low?
We have not received it yet. Maybe you can try to email us the words that were not blocked but should be?
-
RE: Moderator too low?
Can you please email the example to info@creaticode.com?
We will make adjustments if needed. Thank you.
-
AI - 2 Chatbots Debating Each Other (Difficulty: 4)
Introduction
Sometimes you need to use more than one chatbot in your project, so that each chatbot takes on a different role or works on a different task. In this tutorial, you will learn to make 2 chatbots debate each other on any topic the user specifies.
Step 1 - Set up the stage
Create a new project in the CreatiCode playground, and remove the dog sprite. Then select a background for the stage. You can create an interesting background using the AI image generator. For example, you can use this prompt: “2 cute robots having a debate behind podiums on a stage, cartoon style.” You will get a background image like this:
Step 2 - Add an input box for the topic
Create an input box on the top left for the user to input a debate topic:
Here is the widget block you can use. Feel free to adjust the parameters.
Step 3 - Set the initial value for the input box
Since we will be testing our program many times during development, we don’t want to type in the topic every time. Therefore, we can set a default topic like this:
Step 4 - Add 2 Buttons
Next, please add 2 buttons to the right: “Start” and “Continue”:
Here are the blocks for them:
Step 5 - Add a chat window when the Start button is clicked
When the user clicks the Start button, we will add a chat window widget. However, we do not want to show the input box at the bottom of the chat window: both debating parties are AI chatbots, so no human input is needed.
One simple solution is to move the chat window lower, so its bottom section is hidden below the bottom edge of the stage:
Below is the block for adding the chat window that way:
Step 6 - Write the common instruction
Now we will start to write the first prompt that will be given to both chatbots. Since the prompt will be very similar for both chatbots, we can store it in a variable named “instruction”:
Step 7 - Write the Pro side prompt
Next, let’s write the prompt for the first chatbot, which will debate on the “pro” side:
Note that we are asking the chatbot to use less than 100 words since we are just developing the program. You can change it later if you would like a more in-depth debate.
Step 8 - Call the first chatbot
Now we are ready to get the opening statement from the first chatbot. We will need to add 2 AI blocks:
Explanations:
- The “select chatbot” block ensures that the request below it is sent to the chatbot with ID 1. The ID can be 1 to 4, so there can be at most 4 chatbots.
- We are using the “LLM model” block with a “small” size. This is a relatively small AI model that runs very fast and is smart enough for many use cases.
- We are using the “waiting” mode, so this block will not finish until we get the response from the AI model and store it in the “result” variable.
- We are using a max length of 1000, which means 1000 tokens, or about 750 words. That’s more than enough, since we are asking the chatbot to use less than 100 words in our request.
- We are using a temperature of 1, which makes the chatbot the most creative.
- We are starting a “new chat” session, since this is the first request and we do not need to fetch any chat messages before this one.
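The bookkeeping these blocks do can be sketched in Python. This is an illustrative model only: the `sessions` dictionary and `chatbot_request` function below are hypothetical, not CreatiCode's API. The key idea is that each chatbot ID owns a separate message history, and a “new chat” request clears that history before sending.

```python
sessions = {}  # chatbot ID -> list of {"role", "content"} messages

def chatbot_request(bot_id, request, session="continue", model_fn=lambda h: ""):
    """Send a request to one of several independent chatbots.

    model_fn stands in for the real LLM call; it receives the full
    history for this chatbot only, never another chatbot's messages.
    """
    if session == "new chat" or bot_id not in sessions:
        sessions[bot_id] = []  # start a fresh history for this chatbot
    history = sessions[bot_id]
    history.append({"role": "user", "content": request})
    reply = model_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because chatbot 1 and chatbot 2 keep separate histories, a “continue” request to chatbot 2 never sees chatbot 1's replies. That is why, later in this tutorial, the con side's prompt must quote the pro side's statement explicitly.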
Step 9 - Display the pro side opening statement
Once we have received the opening statement, we will add it to the chat window. We will name this chatbot the “Pro” bot, and display the speech as white text over green background on the left:
This is what it looks like when we click the Start button:
Step 10 - The con-side request
Next we will write the prompt for getting the opening statement of the con side. One difference from the pro side is that now we have “heard” the pro side opening statement (still stored in the “result” variable), so that should be inserted into the request as well:
Step 11 - Call the second chatbot
Now we are ready to get the opening statement from the second chatbot. The 2 blocks are very similar:
Explanations:
- We are selecting the chatbot with ID of 2, so any chatbot requests after this block will be sent to the second chatbot.
- We are still using the “small” model.
- This is still a “new chat”, since this is the first message we send to the second chatbot. Note that even if we set it to “continue” session, it would not pick up the response from chatbot 1, since they are 2 separate chatbots.
Step 12 - Display the con-side opening statement
Now we can add the response from the second chatbot to the chat window. This time we will make it aligned to the right, and show the text as white over purple:
Now when we press the Start button, we will get opening statements from both sides:
Step 13 - Refactor the code
Before we continue to more chats, this is a good time to refactor our code a bit. It is already clear that every time we ask a chatbot to say something, we will always use these 3 blocks:
- select a chatbot
- ask the chatbot to generate response
- add its output to the chat window.
Therefore, we can wrap these 3 blocks into a custom block. The only changing part is the actual request we send, so that can be specified as a block input.
For example, for the first chatbot, we can define a new block like this. Note that the request being used is the block input, not the “request” variable anymore.
Similarly, we can define another block for the con side chatbot:
Now we can rewrite our main logic using these 2 new blocks like this:
Step 14 - Start the cross-examination
After the opening statements, our debate will enter the “cross-examination” phase, in which the pro side asks the con side questions.
When the user clicks the “Continue” button, we need to compose a new request to the pro side chatbot with the following information:
- The content of the opening statement from the con side
- The debate is entering the cross-examination phase
- Now the pro chatbot should ask the first question
Can you try to compose this request?
Below is an example request that’s concise and clear:
Note that it is very convenient to reuse the “pro chatbot” block. Now, if we click the Continue button, the pro chatbot will ask a question:
Step 15 - The “new chat” session type
There is an issue we need to fix before continuing. Currently, in the “pro chatbot” block’s definition, we are using the “new chat” session type when we use the AI chatbot. This will cause an issue, since every time we run the “pro chatbot” block, it will start a new chat session, and will not pick up any of the previous messages in the debate.
For example, when we run the Continue button, the pro chatbot will start a new session. It will see the con side’s opening statement because we include it in our request this time. However, it will not see its own opening statement, because we are starting a new session with no history.
To fix this issue, we simply need to change the session type to “continue”. Here we rely on a special feature of the chatbot block: after the green flag is clicked, the first use of this block always starts a new session with no history, even if we specify “continue”. Therefore, it is safe to use the “continue” type all the time, as long as the user starts the program by clicking the green flag.
We can change both blocks like this:
Step 16 - Get the con side’s answer
Next, let’s ask the con side chatbot to answer the question:
When we click the “Continue” button, we get a question and then an answer:
Step 17 - Add a “phase” variable
At this time, the pro side has asked a question and the con side has answered it. If we press the Continue button again, we want them to do another round of Q&A. However, the request has to be a bit different since we no longer need to say, “Now you are in cross-examination,” or provide the opening statements.
Essentially, we want a different request to be sent based on whether we are entering the cross-examination the first time or not. We can use a new variable “phase” to keep track of the current phase of the debate.
First, when the Start button is pressed, we will set the phase to “Opening”:
Second, when the Continue button is clicked, we will check if we are in the “Opening” phase or not. If we are, then we set the phase to “QA”, and add our existing logic to that branch:
Step 18 - New round of QA
Now, we are ready to add the logic to make the 2 chatbots go through another Q&A session whenever we click the “Continue” button. The request will be simpler, since based on the chat history, both chatbots already know they are in the cross-examination phase.
Test it by clicking the “Continue” button, and we can keep going forever if we want to. Here is the complete demo:
Challenges
There are many ways to extend this project. Here are some examples:
-
Closing Statement: Add another button, which will trigger both chatbots to give a closing statement.
-
AI Judge: Add another button, which will invoke a third chatbot to serve as the judge, and evaluate both sides on their performance. Note that to make this happen, you would have to store all the messages from both chatbots in a list or table.
-
Other Debate Formats: You can modify the flow to follow other debate competition formats, such as Lincoln-Douglas Debate, Public Forum Debate, or Congressional Debate.