What is the difference between the CrudRepository and JpaRepository interfaces in Spring Data JPA?
Basics
The base interface you choose for your repository has two main purposes. First, you allow the Spring Data repository infrastructure to find your interface and trigger proxy creation so that you can inject instances of the interface into clients. The second purpose is to pull as much functionality as needed into the interface without having to declare extra methods.
The common interfaces
The Spring Data core library ships with two base interfaces that expose a dedicated set of functionalities:
- `CrudRepository` – CRUD methods
- `PagingAndSortingRepository` – methods for pagination and sorting (extends `CrudRepository`)

Store-specific interfaces
The individual store modules (e.g. for JPA or MongoDB) expose store-specific extensions of these base interfaces that allow access to store-specific functionality like flushing or dedicated batching. An example of this is `deleteInBatch(…)` of `JpaRepository`, which is different from `delete(…)` in that it uses a query to delete the given entities. This is more performant, but comes with the side effect of not triggering the JPA-defined cascades (as the spec defines them).

We generally recommend not using these base interfaces, as they expose the underlying persistence technology to the clients and thus tighten the coupling between them and the repository. Plus, you get a bit away from the original definition of a repository, which is basically "a collection of entities". So if you can, stay with `PagingAndSortingRepository`.

Custom repository base interfaces
The downside of directly depending on one of the provided base interfaces is two-fold. Both of them might be considered theoretical, but I think they're important to be aware of:

1. Depending on a Spring Data interface couples your repository interface to the library. I don't think this is a particular problem, as you'll probably use abstractions like `Page` or `Pageable` in your code anyway. Spring Data is not any different from any other general-purpose library like commons-lang or Guava. As long as it provides reasonable benefit, it's just fine.
2. By exposing e.g. `CrudRepository`, you expose a complete set of persistence methods at once. This is probably fine in most circumstances as well, but you might run into situations where you'd like to gain more fine-grained control over the methods exposed, e.g. to create a `ReadOnlyRepository` that doesn't include the `save(…)` and `delete(…)` methods of `CrudRepository`.

The solution to both of these downsides is to craft your own base repository interface, or even a set of them. In a lot of applications I have seen something like this:
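A minimal sketch of what such custom base interfaces might look like (the interface and method names here are illustrative; `@NoRepositoryBean` tells Spring Data not to create instances of the base interfaces themselves):

```java
import java.util.Optional;

import org.springframework.data.repository.NoRepositoryBean;
import org.springframework.data.repository.Repository;

// General-purpose base interface; ties the ID type to Long for consistency.
@NoRepositoryBean
interface ApplicationRepository<T> extends Repository<T, Long> {}

// Read-only variant: only the find methods, no save(…) or delete(…).
@NoRepositoryBean
interface ReadOnlyRepository<T> extends Repository<T, Long> {

  Optional<T> findById(Long id);

  Iterable<T> findAll();
}
```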
The first repository interface is some general-purpose base interface that actually only fixes point 1, but also ties the ID type to `Long` for consistency. The second interface usually has all the `find…(…)` methods copied from `CrudRepository` and `PagingAndSortingRepository`, but does not expose the manipulating ones. Read more on that approach in the reference documentation.

What's the difference between @Component, @Repository and @Service annotations in Spring?
Here we'll focus on some minor differences among them.
Differences between @Component, @Repository, @Controller and @Service
@Component

This is a general-purpose stereotype annotation indicating that the class is a Spring component.
What's special about @Component?

`<context:component-scan>` only scans for `@Component` and does not look for `@Controller`, `@Service` and `@Repository` in general. They are scanned because they themselves are annotated with `@Component`.

Just take a look at the `@Controller`, `@Service` and `@Repository` annotation definitions:

Thus, it's not wrong to say that `@Controller`, `@Service` and `@Repository` are special types of the `@Component` annotation. `<context:component-scan>` picks them up and registers the classes annotated with them as beans, just as if they were annotated with `@Component`.

Special-type annotations are also scanned because they themselves are annotated with `@Component`, which means they are also `@Component`s. If we define our own custom annotation and annotate it with `@Component`, it will also get scanned with `<context:component-scan>`.
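The definitions mentioned above look like this (abbreviated sketch of the Spring Framework sources; note the meta-annotation `@Component` on each):

```java
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Component  // @Service is itself a @Component
public @interface Service {
    String value() default "";
}

// @Repository and @Controller are declared the same way,
// each meta-annotated with @Component.
```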
@Repository

This is to indicate that the class defines a data repository.
What's special about @Repository?

In addition to pointing out that this is an annotation-based configuration, `@Repository`'s job is to catch platform-specific exceptions and re-throw them as one of Spring's unified unchecked exceptions. For this, we're provided with `PersistenceExceptionTranslationPostProcessor`, which we are required to add to our Spring application context like this:

This bean post processor adds an advisor to any bean that's annotated with `@Repository` so that any platform-specific exceptions are caught and then re-thrown as one of Spring's unchecked data access exceptions.
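A minimal XML declaration of that post-processor (the class name is the standard Spring one; a bean id is optional) might look like:

```xml
<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
```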
@Controller

The `@Controller` annotation indicates that a particular class serves the role of a controller. The `@Controller` annotation acts as a stereotype for the annotated class, indicating its role.

What's special about @Controller?
We cannot switch this annotation with any other like `@Service` or `@Repository`, even though they look the same. The dispatcher scans the classes annotated with `@Controller` and detects methods annotated with `@RequestMapping` within them. We can use `@RequestMapping` only on methods whose classes are annotated with `@Controller`; it will NOT work with `@Component`, `@Service`, `@Repository`, etc.

Note: if a class is already registered as a bean through any alternate method, like through `@Bean` or through `@Component`, `@Service`, etc. annotations, then `@RequestMapping` can be picked up if the class is also annotated with the `@RequestMapping` annotation. But that's a different scenario.

@Service
`@Service` beans hold the business logic and call methods in the repository layer.

What's special about @Service?
Apart from the fact that it's used to indicate that the class holds the business logic, there's nothing else noticeable about this annotation; but who knows, Spring may add some additional functionality in the future.
Similarly, in the future Spring may add special functionality for `@Service`, `@Controller` and `@Repository` based on their layering conventions. Hence, it's always a good idea to respect the conventions and use these annotations in line with your layers.

How to split a list into equally-sized chunks in Python?
Here's a generator that yields evenly-sized chunks:

```python
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

import pprint
pprint.pprint(list(chunks(list(range(10, 75)), 10)))
# [[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
#  [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
#  ...
#  [70, 71, 72, 73, 74]]
```
For Python 2, use `xrange` instead of `range` in the function above.
Below is a list-comprehension one-liner (for Python 2, substitute `xrange` for `range`). The method above is preferable, though, since using named functions makes code easier to understand.
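A sketch of that one-liner (the list and chunk size are illustrative):

```python
lst = list(range(10, 75))
n = 10
chunked = [lst[i:i + n] for i in range(0, len(lst), n)]
print(len(chunked))   # 7
print(chunked[-1])    # [70, 71, 72, 73, 74]
```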
When and how to use GraphQL with a microservice architecture?
Definitely approach #1.
Having your clients talk to multiple GraphQL services (as in approach #2) entirely defeats the purpose of using GraphQL in the first place, which is to provide a schema over your entire application data to allow fetching it in a single roundtrip.
Having a shared nothing architecture might seem reasonable from the microservices perspective, but for your client-side code it is an absolute nightmare, because every time you change one of your microservices, you have to update all of your clients. You will definitely regret that.
GraphQL and microservices are a perfect fit, because GraphQL hides the fact that you have a microservice architecture from the clients. From a backend perspective, you want to split everything into microservices, but from a frontend perspective, you would like all your data to come from a single API. Using GraphQL is the best way I know of that lets you do both. It lets you split up your backend into microservices, while still providing a single API to your whole application and allowing joins across data from different services.
If you don’t want to use REST for your microservices, you can of course have each of them have its own GraphQL API, but you should still have an API gateway. The reason people use API gateways is to make it more manageable to call microservices from client applications, not because it fits well into the microservices pattern.
How to make good reproducible pandas examples?
The Good:

Do include a small example DataFrame, either as runnable code:

```python
In [1]: df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B'])
```

or make it "copy and pasteable" using `pd.read_clipboard(sep=r'\s\s+')`:

```python
In [2]: df
Out[2]:
   A  B
0  1  2
1  1  3
2  4  6
```

Test it yourself to make sure it works and reproduces the issue.
Does the issue persist if you reduce it with `df = df.head()`? If not, fiddle around to see if you can make up a small DataFrame which exhibits the issue you are facing.

But every rule has an exception, the obvious one being for performance issues (in which case definitely use `%timeit` and possibly `%prun` to profile your code), where you should generate a larger, random frame. Consider using `np.random.seed` so we have the exact same frame. Having said that, "make this code fast for me" is not strictly on topic for the site.

`df.to_dict` is often useful, with the different `orient` options for different cases. In the example above, I could have grabbed the data and columns from `df.to_dict('split')`.
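For instance, with the small frame from above, `to_dict('split')` gives back the data and columns directly:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [1, 3], [4, 6]], columns=['A', 'B'])
d = df.to_dict('split')
print(d['columns'])  # ['A', 'B']
print(d['data'])     # [[1, 2], [1, 3], [4, 6]]
```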
Explain where the numbers come from:

But say what's incorrect:

Aside: the answer here is to use `df.groupby('A', as_index=False).sum()`.

If you have date columns, apply `pd.to_datetime` to them for good measure. Sometimes this is the issue itself: they were strings.
The Bad:
The correct way is to include an ordinary DataFrame with a `set_index` call:

Be specific about how you got the numbers (what are they)… double-check they're correct.
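A sketch of what that looks like (the column names are made up for illustration):

```python
import pandas as pd

# Build a MultiIndexed frame from a plain, copy-pasteable one:
df = pd.DataFrame([['a', 1, 2], ['a', 2, 3], ['b', 3, 4]],
                  columns=['idx1', 'idx2', 'value']).set_index(['idx1', 'idx2'])
print(list(df.index.names))  # ['idx1', 'idx2']
```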
On that note, you might also want to include the version of Python, your OS, and any other libraries. You could use `pd.show_versions()` or the `session_info` package (which shows loaded libraries and the Jupyter/IPython environment).

The Ugly:
Most data is proprietary; we get that. Make up similar data and see if you can reproduce the problem (something small).

Essays are bad; it's easier with small examples.

Please, we see enough of this in our day jobs. We want to help, but not like this… Cut the intro and just show the relevant DataFrames (or small versions of them) in the step which is causing you trouble.
How does slicing work in Python?
The syntax is:

```python
a[start:stop]  # items start through stop-1
a[start:]      # items start through the rest of the array
a[:stop]       # items from the beginning through stop-1
a[:]           # a copy of the whole array
```

There is also the `step` value, which can be used with any of the above:

```python
a[start:stop:step]  # start through stop-1, by step
```

The key point to remember is that the `:stop` value represents the first value that is not in the selected slice. So, the difference between `stop` and `start` is the number of elements selected (if `step` is 1, the default).

The other feature is that `start` or `stop` may be a negative number, which means it counts from the end of the array instead of the beginning. So:
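A few illustrative examples (a sketch using a small list so each result is easy to verify):

```python
a = list(range(10))  # [0, 1, ..., 9]

assert a[-1] == 9                          # last item
assert a[-2:] == [8, 9]                    # last two items
assert a[:-2] == [0, 1, 2, 3, 4, 5, 6, 7]  # everything except the last two
```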
Similarly, `step` may be a negative number; for example, `a[::-1]` gives all the items of `a`, reversed.

Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for `a[:-2]` and `a` only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen.

Relationship with the `slice` object
A `slice` object can represent a slicing operation, i.e. `a[start:stop:step]` is equivalent to `a[slice(start, stop, step)]`.
Slice objects also behave slightly differently depending on the number of arguments, similarly to `range()`, i.e. both `slice(stop)` and `slice(start, stop[, step])` are supported. To skip specifying a given argument, one might use `None`, so that e.g. `a[start:]` is equivalent to `a[slice(start, None)]` and `a[::-1]` is equivalent to `a[slice(None, None, -1)]`.
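A quick check of those equivalences (a sketch with a small list):

```python
a = list(range(10))

assert a[2:8:2] == a[slice(2, 8, 2)]        # explicit slice object
assert a[3:] == a[slice(3, None)]           # None skips an argument
assert a[::-1] == a[slice(None, None, -1)]  # reversed
```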
While the `:`-based notation is very helpful for simple slicing, the explicit use of `slice()` objects simplifies the programmatic generation of slices.

How to create a pivot table in MySQL?
Many people just use a tool like MS Excel, OpenOffice or other spreadsheet tools for this purpose. This is a valid solution: just copy the data over there and use the tools the GUI offers to solve this.

But… this wasn't the question, and it might even lead to some disadvantages, like how to get the data into the spreadsheet, problematic scaling and so on.
The SQL way…
Given that the source table looks something like this:

Now look at the desired result:
The rows (`EMAIL`, `PRINT x pages`) resemble conditions. The main grouping is by `company_name`.

Setting up the conditions rather calls for using the `CASE` statement. In order to group by something, well, use … `GROUP BY`.
.The basic SQL providing this pivot can look something like this:
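As a sketch (the table and column names here are assumptions based on the description above; the snippet drives SQLite from Python only so it can be run standalone — the `CASE`/`GROUP BY` SQL itself is plain enough to work on MySQL as well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t_log (company_name TEXT, action TEXT, pagecount INTEGER);
    INSERT INTO t_log VALUES
        ('CompanyA', 'EMAIL', NULL),
        ('CompanyA', 'PRINT', 2),
        ('CompanyA', 'PRINT', 2),
        ('CompanyB', 'EMAIL', NULL);
""")

rows = conn.execute("""
    SELECT company_name,
           SUM(CASE WHEN action = 'EMAIL'                   THEN 1 ELSE 0 END) AS email,
           SUM(CASE WHEN action = 'PRINT' AND pagecount = 2 THEN 1 ELSE 0 END) AS print_2_pages
    FROM t_log
    GROUP BY company_name
    ORDER BY company_name
""").fetchall()

print(rows)  # [('CompanyA', 1, 2), ('CompanyB', 1, 0)]
```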
This should provide the desired result very fast. The major downside of this approach: the more rows you want in your pivot table, the more conditions you need to define in your SQL statement.
This can be dealt with too; therefore, people tend to use prepared statements, routines, counters and such.
How to prevent SQL injection in PHP?
The correct way to avoid SQL injection attacks, no matter which database you use, is to separate the data from SQL, so that data stays data and will never be interpreted as commands by the SQL parser. It is possible to create an SQL statement with correctly formatted data parts, but if you don’t fully understand the details, you should always use prepared statements and parameterized queries. These are SQL statements that are sent to and parsed by the database server separately from any parameters. This way it is impossible for an attacker to inject malicious SQL.
You basically have two options to achieve this: using PDO (which works with any supported database driver), or using MySQLi (for MySQL).
Since PHP 8.2 we can make use of `execute_query()`, which prepares, binds parameters, and executes the SQL statement in one method call. Up to PHP 8.1, the prepare, bind and execute steps are separate.
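Sketches of both mysqli styles, plus the PDO equivalent with a named parameter (`$dbConnection`, `$pdo`, `$name` and the employees table are assumed from context):

```php
<?php
// PHP 8.2+ (mysqli): prepare, bind and execute in one call
$result = $dbConnection->execute_query('SELECT * FROM employees WHERE name = ?', [$name]);
while ($row = $result->fetch_assoc()) {
    // ...
}

// Up to PHP 8.1 (mysqli): the steps are separate
$stmt = $dbConnection->prepare('SELECT * FROM employees WHERE name = ?');
$stmt->bind_param('s', $name); // 's' marks the parameter type: string
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
    // ...
}

// PDO (any database driver), with a named parameter
$stmt = $pdo->prepare('SELECT * FROM employees WHERE name = :name');
$stmt->execute(['name' => $name]);
foreach ($stmt as $row) {
    // ...
}
```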
If you're connecting to a database other than MySQL, there is a driver-specific second option that you can refer to (for example, `pg_prepare()` and `pg_execute()` for PostgreSQL). PDO is the universal option.

Correctly setting up the connection
PDO
Note that when using PDO to access a MySQL database, real prepared statements are not used by default. To fix this, you have to disable the emulation of prepared statements. An example of creating a connection using PDO is:
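A sketch of such a connection (the DSN, database name and credentials are placeholders):

```php
<?php
$dsn = 'mysql:dbname=dbtest;host=127.0.0.1;charset=utf8mb4';
$dbConnection = new PDO($dsn, 'user', 'password');

$dbConnection->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$dbConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
```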
In the above example, the error mode isn't strictly necessary, but it is advised to add it. This way PDO will inform you of all MySQL errors by means of throwing a `PDOException`.

What is mandatory, however, is the first `setAttribute()` line, which tells PDO to disable emulated prepared statements and use real prepared statements. This makes sure the statement and the values aren't parsed by PHP before being sent to the MySQL server (giving a possible attacker no chance to inject malicious SQL).

Although you can set the `charset` in the options of the constructor, it's important to note that 'older' versions of PHP (before 5.3.6) silently ignored the charset parameter in the DSN.

Mysqli
For mysqli we have to follow the same routine:
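A sketch for mysqli (again with placeholder credentials):

```php
<?php
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // throw exceptions on errors
$dbConnection = new mysqli('127.0.0.1', 'user', 'password', 'dbtest');
$dbConnection->set_charset('utf8mb4');
```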
Explanation
The SQL statement you pass to `prepare` is parsed and compiled by the database server. By specifying parameters (either a `?` or a named parameter like `:name` in the example above) you tell the database engine what you want to filter on. Then when you call `execute`, the prepared statement is combined with the parameter values you specify.

The important thing here is that the parameter values are combined with the compiled statement, not an SQL string. SQL injection works by tricking the script into including malicious strings when it creates the SQL to send to the database. So by sending the actual SQL separately from the parameters, you limit the risk of ending up with something you didn't intend.
Any parameters you send when using a prepared statement will just be treated as strings (although the database engine may do some optimization, so parameters may end up as numbers too, of course). In the example above, if the `$name` variable contains `'Sarah'; DELETE FROM employees`, the result would simply be a search for the string `"'Sarah'; DELETE FROM employees"`, and you will not end up with an empty table.

Another benefit of using prepared statements is that if you execute the same statement many times in the same session, it will only be parsed and compiled once, giving you some speed gains.
Oh, and since you asked about how to do it for an insert, here’s an example (using PDO):
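A sketch of a parameterized insert with PDO (the table and column names are made up):

```php
<?php
$stmt = $dbConnection->prepare('INSERT INTO employees (name, salary) VALUES (:name, :salary)');
$stmt->execute(['name' => $name, 'salary' => $salary]);
```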
Can prepared statements be used for dynamic queries?
While you can still use prepared statements for the query parameters, the structure of the dynamic query itself cannot be parameterized, and certain query features cannot be parameterized either.
For these specific scenarios, the best thing to do is use a whitelist filter that restricts the possible values.